2025-08-29 14:05:14.986992 | Job console starting
2025-08-29 14:05:14.996932 | Updating git repos
2025-08-29 14:05:15.059260 | Cloning repos into workspace
2025-08-29 14:05:15.285657 | Restoring repo states
2025-08-29 14:05:15.307399 | Merging changes
2025-08-29 14:05:15.307422 | Checking out repos
2025-08-29 14:05:15.561766 | Preparing playbooks
2025-08-29 14:05:16.225349 | Running Ansible setup
2025-08-29 14:05:20.399405 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 14:05:21.224949 |
2025-08-29 14:05:21.225132 | PLAY [Base pre]
2025-08-29 14:05:21.244360 |
2025-08-29 14:05:21.244529 | TASK [Setup log path fact]
2025-08-29 14:05:21.265132 | orchestrator | ok
2025-08-29 14:05:21.283542 |
2025-08-29 14:05:21.283691 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 14:05:21.313113 | orchestrator | ok
2025-08-29 14:05:21.324983 |
2025-08-29 14:05:21.325110 | TASK [emit-job-header : Print job information]
2025-08-29 14:05:21.365103 | # Job Information
2025-08-29 14:05:21.365368 | Ansible Version: 2.16.14
2025-08-29 14:05:21.365452 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-08-29 14:05:21.365493 | Pipeline: post
2025-08-29 14:05:21.365521 | Executor: 521e9411259a
2025-08-29 14:05:21.365541 | Triggered by: https://github.com/osism/testbed/commit/4170080bde3f8ebb424d0797e843b3d9d7dc2e22
2025-08-29 14:05:21.365563 | Event ID: 2bf5971c-84e1-11f0-922c-b0e8d7badef4
2025-08-29 14:05:21.375494 |
2025-08-29 14:05:21.375626 | LOOP [emit-job-header : Print node information]
2025-08-29 14:05:21.540461 | orchestrator | ok:
2025-08-29 14:05:21.540729 | orchestrator | # Node Information
2025-08-29 14:05:21.540765 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 14:05:21.540789 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 14:05:21.540811 | orchestrator | Username: zuul-testbed04
2025-08-29 14:05:21.540832 | orchestrator | Distro: Debian 12.11
2025-08-29 14:05:21.540861 | orchestrator | Provider: static-testbed
2025-08-29 14:05:21.540886 | orchestrator | Region:
2025-08-29 14:05:21.540907 | orchestrator | Label: testbed-orchestrator
2025-08-29 14:05:21.540926 | orchestrator | Product Name: OpenStack Nova
2025-08-29 14:05:21.540945 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 14:05:21.563738 |
2025-08-29 14:05:21.563885 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 14:05:22.065670 | orchestrator -> localhost | changed
2025-08-29 14:05:22.077456 |
2025-08-29 14:05:22.077629 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 14:05:23.339790 | orchestrator -> localhost | changed
2025-08-29 14:05:23.367184 |
2025-08-29 14:05:23.367341 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 14:05:23.705428 | orchestrator -> localhost | ok
2025-08-29 14:05:23.724993 |
2025-08-29 14:05:23.725152 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 14:05:23.761043 | orchestrator | ok
2025-08-29 14:05:23.778292 | orchestrator | included: /var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 14:05:23.787577 |
2025-08-29 14:05:23.787703 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 14:05:26.553415 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 14:05:26.553661 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/work/2a234aeae003479eb1e4b9822ba3cdf0_id_rsa
2025-08-29 14:05:26.553704 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/work/2a234aeae003479eb1e4b9822ba3cdf0_id_rsa.pub
2025-08-29 14:05:26.553732 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 14:05:26.553756 | orchestrator -> localhost | SHA256:uicL4577Veb73gXGakUkKHZDQPr/LB8/FPzPIp6GBIU zuul-build-sshkey
2025-08-29 14:05:26.553779 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 14:05:26.553813 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 14:05:26.553836 | orchestrator -> localhost | | .o+... . |
2025-08-29 14:05:26.553857 | orchestrator -> localhost | | .E = o |
2025-08-29 14:05:26.553877 | orchestrator -> localhost | | .. + . .. |
2025-08-29 14:05:26.553897 | orchestrator -> localhost | | .. oo |
2025-08-29 14:05:26.553916 | orchestrator -> localhost | | S.o =o |
2025-08-29 14:05:26.553941 | orchestrator -> localhost | | . =. +...|
2025-08-29 14:05:26.553961 | orchestrator -> localhost | | o . ..o.+. .o|
2025-08-29 14:05:26.553981 | orchestrator -> localhost | | . +.o. o=o=o.o|
2025-08-29 14:05:26.554001 | orchestrator -> localhost | | .=o++ .BO.oo |
2025-08-29 14:05:26.554022 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 14:05:26.554072 | orchestrator -> localhost | ok: Runtime: 0:00:01.885933
2025-08-29 14:05:26.561873 |
2025-08-29 14:05:26.561996 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 14:05:26.591224 | orchestrator | ok
2025-08-29 14:05:26.601454 | orchestrator | included: /var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 14:05:26.610980 |
2025-08-29 14:05:26.611106 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 14:05:26.635008 | orchestrator | skipping: Conditional result was False
2025-08-29 14:05:26.643488 |
2025-08-29 14:05:26.643609 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 14:05:27.358965 | orchestrator | changed
2025-08-29 14:05:27.376103 |
2025-08-29 14:05:27.376243 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 14:05:27.663473 | orchestrator | ok
2025-08-29 14:05:27.672889 |
2025-08-29 14:05:27.673016 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 14:05:28.056717 | orchestrator | ok
2025-08-29 14:05:28.064097 |
2025-08-29 14:05:28.064217 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 14:05:28.448812 | orchestrator | ok
2025-08-29 14:05:28.455294 |
2025-08-29 14:05:28.455441 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 14:05:28.479401 | orchestrator | skipping: Conditional result was False
2025-08-29 14:05:28.486084 |
2025-08-29 14:05:28.486189 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 14:05:29.022307 | orchestrator -> localhost | changed
2025-08-29 14:05:29.068061 |
2025-08-29 14:05:29.068275 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 14:05:29.432044 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/work/2a234aeae003479eb1e4b9822ba3cdf0_id_rsa (zuul-build-sshkey)
2025-08-29 14:05:29.432486 | orchestrator -> localhost | ok: Runtime: 0:00:00.013771
2025-08-29 14:05:29.442820 |
2025-08-29 14:05:29.442983 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 14:05:29.854830 | orchestrator | ok
2025-08-29 14:05:29.864141 |
2025-08-29 14:05:29.864288 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 14:05:29.899570 | orchestrator | skipping: Conditional result was False
2025-08-29 14:05:29.951911 |
2025-08-29 14:05:29.952039 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 14:05:30.345184 | orchestrator | ok
2025-08-29 14:05:30.359337 |
2025-08-29 14:05:30.359505 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 14:05:30.402369 | orchestrator | ok
2025-08-29 14:05:30.410732 |
2025-08-29 14:05:30.410891 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 14:05:30.744840 | orchestrator -> localhost | ok
2025-08-29 14:05:30.773142 |
2025-08-29 14:05:30.773308 | TASK [validate-host : Collect information about the host]
2025-08-29 14:05:31.965985 | orchestrator | ok
2025-08-29 14:05:31.980876 |
2025-08-29 14:05:31.981006 | TASK [validate-host : Sanitize hostname]
2025-08-29 14:05:32.039963 | orchestrator | ok
2025-08-29 14:05:32.046335 |
2025-08-29 14:05:32.046494 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 14:05:32.648946 | orchestrator -> localhost | changed
2025-08-29 14:05:32.658643 |
2025-08-29 14:05:32.659024 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 14:05:33.095267 | orchestrator | ok
2025-08-29 14:05:33.101254 |
2025-08-29 14:05:33.101372 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 14:05:33.705946 | orchestrator -> localhost | changed
2025-08-29 14:05:33.726620 |
2025-08-29 14:05:33.726750 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 14:05:34.004342 | orchestrator | ok
2025-08-29 14:05:34.014340 |
2025-08-29 14:05:34.014530 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 14:06:12.827286 | orchestrator | changed:
2025-08-29 14:06:12.827530 | orchestrator | .d..t...... src/
2025-08-29 14:06:12.827567 | orchestrator | .d..t...... src/github.com/
2025-08-29 14:06:12.827592 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 14:06:12.827614 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 14:06:12.827634 | orchestrator | RedHat.yml
2025-08-29 14:06:12.840952 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 14:06:12.840970 | orchestrator | RedHat.yml
2025-08-29 14:06:12.841023 | orchestrator | = 2.2.0"...
2025-08-29 14:06:25.275799 | orchestrator | 14:06:25.275 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 14:06:25.315774 | orchestrator | 14:06:25.315 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-08-29 14:06:25.523181 | orchestrator | 14:06:25.523 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 14:06:25.995078 | orchestrator | 14:06:25.994 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:06:26.079791 | orchestrator | 14:06:26.079 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 14:06:26.572970 | orchestrator | 14:06:26.572 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:06:26.654434 | orchestrator | 14:06:26.654 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 14:06:27.317321 | orchestrator | 14:06:27.317 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 14:06:27.317385 | orchestrator | 14:06:27.317 STDOUT terraform: Providers are signed by their developers.
2025-08-29 14:06:27.317393 | orchestrator | 14:06:27.317 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 14:06:27.317399 | orchestrator | 14:06:27.317 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 14:06:27.317423 | orchestrator | 14:06:27.317 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 14:06:27.317477 | orchestrator | 14:06:27.317 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 14:06:27.317532 | orchestrator | 14:06:27.317 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 14:06:27.317564 | orchestrator | 14:06:27.317 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 14:06:27.317593 | orchestrator | 14:06:27.317 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 14:06:27.317650 | orchestrator | 14:06:27.317 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 14:06:27.317705 | orchestrator | 14:06:27.317 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 14:06:27.317720 | orchestrator | 14:06:27.317 STDOUT terraform: should now work.
2025-08-29 14:06:27.317771 | orchestrator | 14:06:27.317 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 14:06:27.317816 | orchestrator | 14:06:27.317 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 14:06:27.317861 | orchestrator | 14:06:27.317 STDOUT terraform: commands will detect it and remind you to do so if necessary.
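The provider constraints that drove this init run are not part of the log; only the openstack constraint (">= 1.53.0") and the installed versions are visible. A minimal sketch of a required_providers block that would produce these selections, with the file name and the unconstrained entries for local and null being assumptions, could look like this:

# Hypothetical versions.tf; only the openstack constraint is visible in the
# log above, the source addresses match the installed providers, everything
# else is assumed.
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}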
2025-08-29 14:06:27.436155 | orchestrator | 14:06:27.435 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-08-29 14:06:27.436245 | orchestrator | 14:06:27.436 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 14:06:27.699584 | orchestrator | 14:06:27.699 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 14:06:27.699702 | orchestrator | 14:06:27.699 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 14:06:27.699809 | orchestrator | 14:06:27.699 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 14:06:27.699855 | orchestrator | 14:06:27.699 STDOUT terraform: for this configuration.
2025-08-29 14:06:27.834806 | orchestrator | 14:06:27.834 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-08-29 14:06:27.834903 | orchestrator | 14:06:27.834 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 14:06:27.934434 | orchestrator | 14:06:27.934 STDOUT terraform: ci.auto.tfvars
2025-08-29 14:06:28.021327 | orchestrator | 14:06:28.021 STDOUT terraform: default_custom.tf
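The fmt step lists two workspace-specific files, ci.auto.tfvars and default_custom.tf, whose contents are not shown in this log. A hypothetical ci.auto.tfvars consistent with the execution plan that follows could look like the sketch below; the literal values (flavors, volume sizing, volume type, availability zone) are taken from the plan, while the variable names themselves are assumptions:

# Hypothetical ci.auto.tfvars; variable names are assumed, values mirror the
# plan output that follows.
prefix            = "testbed"
availability_zone = "nova"
volume_type       = "ssd"
manager_flavor    = "OSISM-4V-16"
node_flavor       = "OSISM-8V-32"
number_of_nodes   = 6
node_volume_size  = 20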
2025-08-29 14:06:28.617766 | orchestrator | 14:06:28.617 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-08-29 14:06:29.607103 | orchestrator | 14:06:29.606 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 14:06:30.195657 | orchestrator | 14:06:30.195 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 14:06:30.436 - 14:06:30.455 | orchestrator | STDOUT terraform:

  OpenTofu used the selected providers to generate the following execution
  plan. Resource actions are indicated with the following symbols:
    + create
   <= read (data resources)

  OpenTofu will perform the following actions:

    # data.openstack_images_image_v2.image will be read during apply
    # (config refers to values not yet known)
   <= data "openstack_images_image_v2" "image" {
        + checksum    = (known after apply)
        + created_at  = (known after apply)
        + file        = (known after apply)
        + id          = (known after apply)
        + metadata    = (known after apply)
        + min_disk_gb = (known after apply)
        + min_ram_mb  = (known after apply)
        + most_recent = true
        + name        = (known after apply)
        + protected   = (known after apply)
        + region      = (known after apply)
        + schema      = (known after apply)
        + size_bytes  = (known after apply)
        + tags        = (known after apply)
        + updated_at  = (known after apply)
      }

    # data.openstack_images_image_v2.image_node will be read during apply
    # (config refers to values not yet known)
   <= data "openstack_images_image_v2" "image_node" {
        + checksum    = (known after apply)
        + created_at  = (known after apply)
        + file        = (known after apply)
        + id          = (known after apply)
        + metadata    = (known after apply)
        + min_disk_gb = (known after apply)
        + min_ram_mb  = (known after apply)
        + most_recent = true
        + name        = (known after apply)
        + protected   = (known after apply)
        + region      = (known after apply)
        + schema      = (known after apply)
        + size_bytes  = (known after apply)
        + tags        = (known after apply)
        + updated_at  = (known after apply)
      }

    # local_file.MANAGER_ADDRESS will be created
    + resource "local_file" "MANAGER_ADDRESS" {
        + content              = (known after apply)
        + content_base64sha256 = (known after apply)
        + content_base64sha512 = (known after apply)
        + content_md5          = (known after apply)
        + content_sha1         = (known after apply)
        + content_sha256       = (known after apply)
        + content_sha512       = (known after apply)
        + directory_permission = "0777"
        + file_permission      = "0644"
        + filename             = ".MANAGER_ADDRESS.ci"
        + id                   = (known after apply)
      }

    # local_file.id_rsa_pub will be created
    + resource "local_file" "id_rsa_pub" {
        + content              = (known after apply)
        + content_base64sha256 = (known after apply)
        + content_base64sha512 = (known after apply)
        + content_md5          = (known after apply)
        + content_sha1         = (known after apply)
        + content_sha256       = (known after apply)
        + content_sha512       = (known after apply)
        + directory_permission = "0777"
        + file_permission      = "0644"
        + filename             = ".id_rsa.ci.pub"
        + id                   = (known after apply)
      }

    # local_file.inventory will be created
    + resource "local_file" "inventory" {
        + content              = (known after apply)
        + content_base64sha256 = (known after apply)
        + content_base64sha512 = (known after apply)
        + content_md5          = (known after apply)
        + content_sha1         = (known after apply)
        + content_sha256       = (known after apply)
        + content_sha512       = (known after apply)
        + directory_permission = "0777"
        + file_permission      = "0644"
        + filename             = "inventory.ci"
        + id                   = (known after apply)
      }

    # local_sensitive_file.id_rsa will be created
    + resource "local_sensitive_file" "id_rsa" {
        + content              = (sensitive value)
        + content_base64sha256 = (known after apply)
        + content_base64sha512 = (known after apply)
        + content_md5          = (known after apply)
        + content_sha1         = (known after apply)
        + content_sha256       = (known after apply)
        + content_sha512       = (known after apply)
        + directory_permission = "0700"
        + file_permission      = "0600"
        + filename             = ".id_rsa.ci"
        + id                   = (known after apply)
      }

    # null_resource.node_semaphore will be created
    + resource "null_resource" "node_semaphore" {
        + id = (known after apply)
      }

    # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
    + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-manager-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
    + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-0-node-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
    + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-1-node-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
    + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-2-node-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
    + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-3-node-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
    + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-4-node-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
    + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + image_id             = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-5-node-base"
        + region               = (known after apply)
        + size                 = 80
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[0] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-0-node-3"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[1] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-1-node-4"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[2] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-2-node-5"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[3] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-3-node-3"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[4] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-4-node-4"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[5] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-5-node-5"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[6] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-6-node-3"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[7] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-7-node-4"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_blockstorage_volume_v3.node_volume[8] will be created
    + resource "openstack_blockstorage_volume_v3" "node_volume" {
        + attachment           = (known after apply)
        + availability_zone    = "nova"
        + id                   = (known after apply)
        + metadata             = (known after apply)
        + name                 = "testbed-volume-8-node-5"
        + region               = (known after apply)
        + size                 = 20
        + volume_retype_policy = "never"
        + volume_type          = "ssd"
      }

    # openstack_compute_instance_v2.manager_server will be created
    + resource "openstack_compute_instance_v2" "manager_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-4V-16"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-manager"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = (sensitive value)

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

    # openstack_compute_instance_v2.node_server[0] will be created
    + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-0"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

    # openstack_compute_instance_v2.node_server[1] will be created
    + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-1"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

    # openstack_compute_instance_v2.node_server[2] will be created
    + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-2"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known
after apply) 2025-08-29 14:06:30.455787 | orchestrator | 14:06:30.455 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:06:30.455797 | orchestrator | 14:06:30.455 STDOUT terraform:  + block_device { 2025-08-29 14:06:30.455803 | orchestrator | 14:06:30.455 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:30.455812 | orchestrator | 14:06:30.455 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:30.455826 | orchestrator | 14:06:30.455 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:30.455857 | orchestrator | 14:06:30.455 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:30.455872 | orchestrator | 14:06:30.455 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:30.455936 | orchestrator | 14:06:30.455 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.455950 | orchestrator | 14:06:30.455 STDOUT terraform:  } 2025-08-29 14:06:30.455959 | orchestrator | 14:06:30.455 STDOUT terraform:  + network { 2025-08-29 14:06:30.455966 | orchestrator | 14:06:30.455 STDOUT terraform:  + access_network = false 2025-08-29 14:06:30.455975 | orchestrator | 14:06:30.455 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:06:30.456278 | orchestrator | 14:06:30.455 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:06:30.456288 | orchestrator | 14:06:30.456 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:06:30.456294 | orchestrator | 14:06:30.456 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:06:30.456301 | orchestrator | 14:06:30.456 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:06:30.456307 | orchestrator | 14:06:30.456 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.456313 | orchestrator | 14:06:30.456 STDOUT terraform:  } 2025-08-29 14:06:30.456321 | orchestrator | 14:06:30.456 STDOUT terraform:  } 2025-08-29 14:06:30.456327 | orchestrator | 14:06:30.456 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-08-29 14:06:30.456334 | orchestrator | 14:06:30.456 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:06:30.456340 | orchestrator | 14:06:30.456 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:30.456347 | orchestrator | 14:06:30.456 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:30.456356 | orchestrator | 14:06:30.456 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:06:30.456363 | orchestrator | 14:06:30.456 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.456369 | orchestrator | 14:06:30.456 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:30.456375 | orchestrator | 14:06:30.456 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:30.456384 | orchestrator | 14:06:30.456 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:30.456419 | orchestrator | 14:06:30.456 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:30.456478 | orchestrator | 14:06:30.456 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:06:30.456487 | orchestrator | 14:06:30.456 STDOUT terraform:  + force_delete = false 2025-08-29 14:06:30.456496 | orchestrator | 14:06:30.456 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:06:30.456561 | orchestrator | 14:06:30.456 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.456619 | orchestrator | 14:06:30.456 STDOUT 
terraform:  + image_id = (known after apply) 2025-08-29 14:06:30.456628 | orchestrator | 14:06:30.456 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:06:30.456637 | orchestrator | 14:06:30.456 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:06:30.456709 | orchestrator | 14:06:30.456 STDOUT terraform:  + name = "testbed-node-3" 2025-08-29 14:06:30.456718 | orchestrator | 14:06:30.456 STDOUT terraform:  + power_state = "active" 2025-08-29 14:06:30.456732 | orchestrator | 14:06:30.456 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.456741 | orchestrator | 14:06:30.456 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:06:30.456841 | orchestrator | 14:06:30.456 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:06:30.456850 | orchestrator | 14:06:30.456 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:06:30.456860 | orchestrator | 14:06:30.456 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:06:30.456867 | orchestrator | 14:06:30.456 STDOUT terraform:  + block_device { 2025-08-29 14:06:30.456876 | orchestrator | 14:06:30.456 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:30.456949 | orchestrator | 14:06:30.456 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:30.456958 | orchestrator | 14:06:30.456 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:30.456964 | orchestrator | 14:06:30.456 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:30.456973 | orchestrator | 14:06:30.456 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:30.457022 | orchestrator | 14:06:30.456 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.457031 | orchestrator | 14:06:30.457 STDOUT terraform:  } 2025-08-29 14:06:30.457040 | orchestrator | 14:06:30.457 STDOUT terraform:  + network { 2025-08-29 14:06:30.457047 | orchestrator | 14:06:30.457 STDOUT terraform:  + access_network = false 2025-08-29 14:06:30.457291 | orchestrator | 14:06:30.457 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:06:30.457300 | orchestrator | 14:06:30.457 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:06:30.457307 | orchestrator | 14:06:30.457 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:06:30.457313 | orchestrator | 14:06:30.457 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:06:30.457320 | orchestrator | 14:06:30.457 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:06:30.457326 | orchestrator | 14:06:30.457 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.457332 | orchestrator | 14:06:30.457 STDOUT terraform:  } 2025-08-29 14:06:30.457339 | orchestrator | 14:06:30.457 STDOUT terraform:  } 2025-08-29 14:06:30.457345 | orchestrator | 14:06:30.457 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-08-29 14:06:30.457352 | orchestrator | 14:06:30.457 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:06:30.457361 | orchestrator | 14:06:30.457 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:30.457367 | orchestrator | 14:06:30.457 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:30.457376 | orchestrator | 14:06:30.457 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:06:30.457465 | orchestrator | 14:06:30.457 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.457473 | 
orchestrator | 14:06:30.457 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:30.457486 | orchestrator | 14:06:30.457 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:30.457500 | orchestrator | 14:06:30.457 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:30.457509 | orchestrator | 14:06:30.457 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:30.457585 | orchestrator | 14:06:30.457 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:06:30.457594 | orchestrator | 14:06:30.457 STDOUT terraform:  + force_delete = false 2025-08-29 14:06:30.457603 | orchestrator | 14:06:30.457 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:06:30.457644 | orchestrator | 14:06:30.457 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.457655 | orchestrator | 14:06:30.457 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:06:30.457699 | orchestrator | 14:06:30.457 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:06:30.457710 | orchestrator | 14:06:30.457 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:06:30.457759 | orchestrator | 14:06:30.457 STDOUT terraform:  + name = "testbed-node-4" 2025-08-29 14:06:30.457768 | orchestrator | 14:06:30.457 STDOUT terraform:  + power_state = "active" 2025-08-29 14:06:30.457830 | orchestrator | 14:06:30.457 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.457838 | orchestrator | 14:06:30.457 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:06:30.457847 | orchestrator | 14:06:30.457 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:06:30.457895 | orchestrator | 14:06:30.457 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:06:30.457956 | orchestrator | 14:06:30.457 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:06:30.457964 | orchestrator | 14:06:30.457 STDOUT terraform:  + block_device { 2025-08-29 14:06:30.457970 | orchestrator | 14:06:30.457 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:30.457979 | orchestrator | 14:06:30.457 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:30.457987 | orchestrator | 14:06:30.457 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:30.458055 | orchestrator | 14:06:30.457 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:30.458067 | orchestrator | 14:06:30.458 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:30.458166 | orchestrator | 14:06:30.458 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.458174 | orchestrator | 14:06:30.458 STDOUT terraform:  } 2025-08-29 14:06:30.458181 | orchestrator | 14:06:30.458 STDOUT terraform:  + network { 2025-08-29 14:06:30.458187 | orchestrator | 14:06:30.458 STDOUT terraform:  + access_network = false 2025-08-29 14:06:30.458193 | orchestrator | 14:06:30.458 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:06:30.458202 | orchestrator | 14:06:30.458 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:06:30.458208 | orchestrator | 14:06:30.458 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:06:30.458328 | orchestrator | 14:06:30.458 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:06:30.458336 | orchestrator | 14:06:30.458 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:06:30.458343 | orchestrator | 14:06:30.458 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.458349 | 
orchestrator | 14:06:30.458 STDOUT terraform:  } 2025-08-29 14:06:30.458355 | orchestrator | 14:06:30.458 STDOUT terraform:  } 2025-08-29 14:06:30.458364 | orchestrator | 14:06:30.458 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-08-29 14:06:30.458372 | orchestrator | 14:06:30.458 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:06:30.458432 | orchestrator | 14:06:30.458 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:30.458442 | orchestrator | 14:06:30.458 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:30.458487 | orchestrator | 14:06:30.458 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:06:30.458576 | orchestrator | 14:06:30.458 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.458586 | orchestrator | 14:06:30.458 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:30.458592 | orchestrator | 14:06:30.458 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:30.458601 | orchestrator | 14:06:30.458 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:30.458609 | orchestrator | 14:06:30.458 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:30.458656 | orchestrator | 14:06:30.458 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:06:30.458669 | orchestrator | 14:06:30.458 STDOUT terraform:  + force_delete = false 2025-08-29 14:06:30.458734 | orchestrator | 14:06:30.458 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:06:30.458742 | orchestrator | 14:06:30.458 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.458750 | orchestrator | 14:06:30.458 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:06:30.458797 | orchestrator | 14:06:30.458 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:06:30.458806 | orchestrator | 14:06:30.458 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:06:30.458886 | orchestrator | 14:06:30.458 STDOUT terraform:  + name = "testbed-node-5" 2025-08-29 14:06:30.458896 | orchestrator | 14:06:30.458 STDOUT terraform:  + power_state = "active" 2025-08-29 14:06:30.458905 | orchestrator | 14:06:30.458 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.458915 | orchestrator | 14:06:30.458 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:06:30.458925 | orchestrator | 14:06:30.458 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:06:30.458985 | orchestrator | 14:06:30.458 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:06:30.459051 | orchestrator | 14:06:30.458 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:06:30.459059 | orchestrator | 14:06:30.459 STDOUT terraform:  + block_device { 2025-08-29 14:06:30.459071 | orchestrator | 14:06:30.459 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:30.459078 | orchestrator | 14:06:30.459 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:30.459085 | orchestrator | 14:06:30.459 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:30.459122 | orchestrator | 14:06:30.459 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:30.459169 | orchestrator | 14:06:30.459 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:30.459179 | orchestrator | 14:06:30.459 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:30.459186 | orchestrator | 14:06:30.459 
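For reference, a minimal HCL sketch of a resource definition that would produce plans like the six node_server entries above (boot-from-volume instances attached to pre-created management ports). Names such as var.node_count, openstack_blockstorage_volume_v3.node_base and the user_data file are illustrative assumptions, not necessarily the identifiers used in the actual testbed Terraform code; the 40-character user_data value in the plan appears to be the hash the provider stores for the cloud-init payload rather than the payload itself.

    # Sketch only: six boot-from-volume nodes, one per management port (assumed names).
    variable "node_count" {
      type    = number
      default = 6
    }

    resource "openstack_compute_instance_v2" "node_server" {
      count             = var.node_count
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = openstack_compute_keypair_v2.key.name
      config_drive      = true
      user_data         = file("user_data.yml")  # assumed source of the cloud-init payload

      # Boot from a pre-created root volume instead of an ephemeral image disk.
      block_device {
        uuid                  = openstack_blockstorage_volume_v3.node_base[count.index].id  # assumed volume resource
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      # Attach the instance to its pre-created management port.
      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }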
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
    (attachments [1] through [8] are identical to [0]; all values known after apply)
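The keypair is generated server-side (no public_key argument is set, so private_key becomes a sensitive attribute), and each extra volume is wired to an instance through a separate attach resource. A minimal sketch under the assumption of a volume resource named openstack_blockstorage_volume_v3.node_volume and a simple modulo mapping of the nine attachments onto the six instances; the real mapping is not visible in this part of the plan.

    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
      # No public_key given: Nova generates the pair and Terraform
      # exposes the private key as a sensitive attribute.
    }

    # Nine attachments pair additional volumes with the six instances;
    # the volume resource and the index mapping below are assumptions.
    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }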
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-08-29 14:06:30.461792 | orchestrator | 14:06:30.461 STDOUT terraform:  + fixed_ip = (known after apply) 2025-08-29 14:06:30.461799 | orchestrator | 14:06:30.461 STDOUT terraform:  + floating_ip = (known after apply) 2025-08-29 14:06:30.461857 | orchestrator | 14:06:30.461 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.461868 | orchestrator | 14:06:30.461 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 14:06:30.461876 | orchestrator | 14:06:30.461 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.461883 | orchestrator | 14:06:30.461 STDOUT terraform:  } 2025-08-29 14:06:30.462054 | orchestrator | 14:06:30.461 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-08-29 14:06:30.462063 | orchestrator | 14:06:30.461 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-08-29 14:06:30.462068 | orchestrator | 14:06:30.461 STDOUT terraform:  + address = (known after apply) 2025-08-29 14:06:30.462074 | orchestrator | 14:06:30.461 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.462079 | orchestrator | 14:06:30.462 STDOUT terraform:  + dns_domain = (known after apply) 2025-08-29 14:06:30.462087 | orchestrator | 14:06:30.462 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:06:30.462093 | orchestrator | 14:06:30.462 STDOUT terraform:  + fixed_ip = (known after apply) 2025-08-29 14:06:30.462100 | orchestrator | 14:06:30.462 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.462145 | orchestrator | 14:06:30.462 STDOUT terraform:  + pool = "public" 2025-08-29 14:06:30.462153 | orchestrator | 14:06:30.462 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 14:06:30.462160 | orchestrator | 14:06:30.462 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.462200 | orchestrator | 14:06:30.462 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:30.462218 | orchestrator | 14:06:30.462 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.462224 | orchestrator | 14:06:30.462 STDOUT terraform:  } 2025-08-29 14:06:30.462297 | orchestrator | 14:06:30.462 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-08-29 14:06:30.462306 | orchestrator | 14:06:30.462 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-08-29 14:06:30.462351 | orchestrator | 14:06:30.462 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:30.462399 | orchestrator | 14:06:30.462 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.462574 | orchestrator | 14:06:30.462 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 14:06:30.462582 | orchestrator | 14:06:30.462 STDOUT terraform:  + "nova", 2025-08-29 14:06:30.462590 | orchestrator | 14:06:30.462 STDOUT terraform:  ] 2025-08-29 14:06:30.462596 | orchestrator | 14:06:30.462 STDOUT terraform:  + dns_domain = (known after apply) 2025-08-29 14:06:30.462601 | orchestrator | 14:06:30.462 STDOUT terraform:  + external = (known after apply) 2025-08-29 14:06:30.462607 | orchestrator | 14:06:30.462 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.462612 | orchestrator | 14:06:30.462 STDOUT terraform:  + mtu = (known after apply) 2025-08-29 14:06:30.462617 | orchestrator | 14:06:30.462 STDOUT terraform:  + name = 
"net-testbed-management" 2025-08-29 14:06:30.462625 | orchestrator | 14:06:30.462 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:30.462632 | orchestrator | 14:06:30.462 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:30.463127 | orchestrator | 14:06:30.462 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.463139 | orchestrator | 14:06:30.462 STDOUT terraform:  + shared = (known after apply) 2025-08-29 14:06:30.463144 | orchestrator | 14:06:30.462 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.463149 | orchestrator | 14:06:30.462 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-08-29 14:06:30.463154 | orchestrator | 14:06:30.462 STDOUT terraform:  + segments (known after apply) 2025-08-29 14:06:30.463158 | orchestrator | 14:06:30.462 STDOUT terraform:  } 2025-08-29 14:06:30.463163 | orchestrator | 14:06:30.462 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-08-29 14:06:30.463168 | orchestrator | 14:06:30.462 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-08-29 14:06:30.463173 | orchestrator | 14:06:30.462 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:30.463180 | orchestrator | 14:06:30.462 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:06:30.463184 | orchestrator | 14:06:30.462 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:06:30.463189 | orchestrator | 14:06:30.462 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.463194 | orchestrator | 14:06:30.462 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:06:30.463203 | orchestrator | 14:06:30.463 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:06:30.463208 | orchestrator | 14:06:30.463 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:06:30.463212 | orchestrator | 14:06:30.463 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:06:30.463217 | orchestrator | 14:06:30.463 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.463222 | orchestrator | 14:06:30.463 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:06:30.463229 | orchestrator | 14:06:30.463 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:30.463501 | orchestrator | 14:06:30.463 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:30.463511 | orchestrator | 14:06:30.463 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:30.463526 | orchestrator | 14:06:30.463 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.463531 | orchestrator | 14:06:30.463 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:06:30.463536 | orchestrator | 14:06:30.463 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.463541 | orchestrator | 14:06:30.463 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:30.463546 | orchestrator | 14:06:30.463 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:06:30.463551 | orchestrator | 14:06:30.463 STDOUT terraform:  } 2025-08-29 14:06:30.463556 | orchestrator | 14:06:30.463 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:30.463561 | orchestrator | 14:06:30.463 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:06:30.463566 | orchestrator | 14:06:30.463 STDOUT 
terraform:  } 2025-08-29 14:06:30.463570 | orchestrator | 14:06:30.463 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:06:30.463575 | orchestrator | 14:06:30.463 STDOUT terraform:  + fixed_ip { 2025-08-29 14:06:30.463582 | orchestrator | 14:06:30.463 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-08-29 14:06:30.463587 | orchestrator | 14:06:30.463 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:30.463592 | orchestrator | 14:06:30.463 STDOUT terraform:  } 2025-08-29 14:06:30.463597 | orchestrator | 14:06:30.463 STDOUT terraform:  } 2025-08-29 14:06:30.463691 | orchestrator | 14:06:30.463 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-08-29 14:06:30.463797 | orchestrator | 14:06:30.463 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:06:30.463935 | orchestrator | 14:06:30.463 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:30.464078 | orchestrator | 14:06:30.463 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:06:30.464246 | orchestrator | 14:06:30.463 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:06:30.464447 | orchestrator | 14:06:30.463 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.464664 | orchestrator | 14:06:30.463 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:06:30.465063 | orchestrator | 14:06:30.463 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:06:30.465069 | orchestrator | 14:06:30.463 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:06:30.465078 | orchestrator | 14:06:30.463 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:06:30.465083 | orchestrator | 14:06:30.463 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.465091 | orchestrator | 14:06:30.463 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:06:30.465096 | orchestrator | 14:06:30.463 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:30.465101 | orchestrator | 14:06:30.464 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:30.465106 | orchestrator | 14:06:30.464 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:30.465110 | orchestrator | 14:06:30.464 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.465115 | orchestrator | 14:06:30.464 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:06:30.465119 | orchestrator | 14:06:30.464 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.465124 | orchestrator | 14:06:30.464 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:30.465129 | orchestrator | 14:06:30.464 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:06:30.465133 | orchestrator | 14:06:30.464 STDOUT terraform:  } 2025-08-29 14:06:30.465138 | orchestrator | 14:06:30.464 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:30.465142 | orchestrator | 14:06:30.464 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:06:30.465147 | orchestrator | 14:06:30.464 STDOUT terraform:  } 2025-08-29 14:06:30.465151 | orchestrator | 14:06:30.464 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:30.465156 | orchestrator | 14:06:30.464 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:06:30.465161 | orchestrator | 14:06:30.464 STDOUT terraform:  } 
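The management network and the manager's port pin a fixed address (192.168.16.5) and whitelist additional prefixes via allowed_address_pairs, presumably for virtual or service addresses shared across nodes. A sketch, assuming a subnet resource openstack_networking_subnet_v2.subnet_management for 192.168.16.0/20 (neither the subnet nor its name appears in this excerpt):

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    # Assumed subnet; the subnet resource itself is not part of the excerpt above.
    resource "openstack_networking_subnet_v2" "subnet_management" {
      name       = "subnet-testbed-management"
      network_id = openstack_networking_network_v2.net_management.id
      cidr       = "192.168.16.0/20"
    }

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.5"
      }

      # Let traffic sourced from the shared prefixes pass port security.
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
    }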
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  # openstack_networking_port_v2.node_port_management[2] will be created
  # openstack_networking_port_v2.node_port_management[3] will be created
  # openstack_networking_port_v2.node_port_management[4] will be created
  # openstack_networking_port_v2.node_port_management[5] will be created
    (identical to node_port_management[0] above, differing only in the fixed_ip
    ip_address: 192.168.16.11, .12, .13, .14 and .15 respectively)
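Each node gets its own management port with a deterministic fixed IP (192.168.16.10 through .15). Continuing the sketch configuration above, the addresses can be derived with cidrhost(); var.node_count and the subnet resource are the same assumptions as before:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = var.node_count
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = cidrhost("192.168.16.0/20", 10 + count.index)  # 192.168.16.10 .. .15
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }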
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed            = (known after apply)
      + enable_snat            = (known after apply)
      + external_network_id    = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id = (known after apply)
      + id                     = (known after apply)
      + name                   = "testbed"
      + region                 = (known after apply)
      + tenant_id              = (known after apply)
      + external_fixed_ip (known after apply)
    }
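Finally, a router named "testbed" uplinks the management subnet to the external network e6be7364-bfd8-4de7-8120-8f41c69a139a, and the security-group rules that follow open ssh (tcp/22) and WireGuard (udp/51820) from anywhere plus unrestricted tcp within 192.168.16.0/20. A sketch, with the security group itself (not shown in this excerpt) assumed as openstack_networking_secgroup_v2.security_group_management and the subnet taken from the earlier sketch:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }

    # Assumed security group; only its rules appear in this part of the plan.
    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name = "testbed-management"  # hypothetical name
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }
    # rule2 and rule3 follow the same pattern: udp/51820 from 0.0.0.0/0 ("wireguard")
    # and tcp with no port range from 192.168.16.0/20.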
"openstack_networking_router_v2" "router" { 2025-08-29 14:06:30.472891 | orchestrator | 14:06:30.469 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:30.473015 | orchestrator | 14:06:30.469 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.473053 | orchestrator | 14:06:30.469 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 14:06:30.473057 | orchestrator | 14:06:30.469 STDOUT terraform:  + "nova", 2025-08-29 14:06:30.473060 | orchestrator | 14:06:30.469 STDOUT terraform:  ] 2025-08-29 14:06:30.473064 | orchestrator | 14:06:30.469 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 14:06:30.473112 | orchestrator | 14:06:30.469 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 14:06:30.473161 | orchestrator | 14:06:30.469 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 14:06:30.473168 | orchestrator | 14:06:30.469 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 14:06:30.473236 | orchestrator | 14:06:30.469 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.473321 | orchestrator | 14:06:30.469 STDOUT terraform:  + name = "testbed" 2025-08-29 14:06:30.473418 | orchestrator | 14:06:30.469 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.473532 | orchestrator | 14:06:30.469 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.473538 | orchestrator | 14:06:30.469 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 14:06:30.473555 | orchestrator | 14:06:30.469 STDOUT terraform:  } 2025-08-29 14:06:30.473559 | orchestrator | 14:06:30.469 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 14:06:30.473564 | orchestrator | 14:06:30.469 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-08-29 14:06:30.473568 | orchestrator | 14:06:30.469 STDOUT terraform:  + description = "ssh" 2025-08-29 14:06:30.473656 | orchestrator | 14:06:30.469 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.473704 | orchestrator | 14:06:30.469 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.473809 | orchestrator | 14:06:30.469 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.473870 | orchestrator | 14:06:30.469 STDOUT terraform:  + port_range_max = 22 2025-08-29 14:06:30.473984 | orchestrator | 14:06:30.469 STDOUT terraform:  + port_range_min = 22 2025-08-29 14:06:30.474063 | orchestrator | 14:06:30.469 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:06:30.474159 | orchestrator | 14:06:30.469 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.474217 | orchestrator | 14:06:30.469 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.474275 | orchestrator | 14:06:30.469 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.474331 | orchestrator | 14:06:30.469 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.474499 | orchestrator | 14:06:30.469 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.474597 | orchestrator | 14:06:30.469 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.475133 | orchestrator | 14:06:30.469 STDOUT terraform:  } 2025-08-29 14:06:30.475204 | orchestrator | 14:06:30.469 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-08-29 14:06:30.475310 | orchestrator | 14:06:30.469 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-08-29 14:06:30.475356 | orchestrator | 14:06:30.470 STDOUT terraform:  + description = "wireguard" 2025-08-29 14:06:30.475360 | orchestrator | 14:06:30.470 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.475364 | orchestrator | 14:06:30.470 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.475368 | orchestrator | 14:06:30.470 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.475463 | orchestrator | 14:06:30.470 STDOUT terraform:  + port_range_max = 51820 2025-08-29 14:06:30.475637 | orchestrator | 14:06:30.470 STDOUT terraform:  + port_range_min = 51820 2025-08-29 14:06:30.475691 | orchestrator | 14:06:30.470 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:06:30.475767 | orchestrator | 14:06:30.470 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.475859 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.475888 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.476019 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.476134 | orchestrator | 14:06:30.470 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.476268 | orchestrator | 14:06:30.470 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.476331 | orchestrator | 14:06:30.470 STDOUT terraform:  } 2025-08-29 14:06:30.476387 | orchestrator | 14:06:30.470 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-08-29 14:06:30.476437 | orchestrator | 14:06:30.470 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-08-29 14:06:30.476529 | orchestrator | 14:06:30.470 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.476636 | orchestrator | 14:06:30.470 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.476744 | orchestrator | 14:06:30.470 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.476823 | orchestrator | 14:06:30.470 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:06:30.476867 | orchestrator | 14:06:30.470 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.476956 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.476983 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.476987 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 14:06:30.476991 | orchestrator | 14:06:30.470 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.477056 | orchestrator | 14:06:30.470 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.477171 | orchestrator | 14:06:30.470 STDOUT terraform:  } 2025-08-29 14:06:30.477324 | orchestrator | 14:06:30.470 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-08-29 14:06:30.477395 | orchestrator | 14:06:30.470 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-08-29 
14:06:30.477464 | orchestrator | 14:06:30.470 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.477597 | orchestrator | 14:06:30.470 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.477704 | orchestrator | 14:06:30.470 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.477888 | orchestrator | 14:06:30.470 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:06:30.477943 | orchestrator | 14:06:30.470 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.478055 | orchestrator | 14:06:30.470 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.478216 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.478379 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 14:06:30.478555 | orchestrator | 14:06:30.471 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.478624 | orchestrator | 14:06:30.471 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.478806 | orchestrator | 14:06:30.471 STDOUT terraform:  } 2025-08-29 14:06:30.478862 | orchestrator | 14:06:30.471 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-08-29 14:06:30.478976 | orchestrator | 14:06:30.471 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-08-29 14:06:30.479038 | orchestrator | 14:06:30.471 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.479129 | orchestrator | 14:06:30.471 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.479201 | orchestrator | 14:06:30.471 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479205 | orchestrator | 14:06:30.471 STDOUT terraform:  + protocol = "icmp" 2025-08-29 14:06:30.479264 | orchestrator | 14:06:30.471 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479496 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.479504 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.479508 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.479511 | orchestrator | 14:06:30.471 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.479525 | orchestrator | 14:06:30.471 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479529 | orchestrator | 14:06:30.471 STDOUT terraform:  } 2025-08-29 14:06:30.479533 | orchestrator | 14:06:30.471 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-08-29 14:06:30.479537 | orchestrator | 14:06:30.471 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-08-29 14:06:30.479541 | orchestrator | 14:06:30.471 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.479545 | orchestrator | 14:06:30.471 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.479548 | orchestrator | 14:06:30.471 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479552 | orchestrator | 14:06:30.471 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:06:30.479556 | orchestrator | 14:06:30.471 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479560 | orchestrator | 14:06:30.471 STDOUT terraform:  + 
remote_address_group_id = (known after apply) 2025-08-29 14:06:30.479568 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.479572 | orchestrator | 14:06:30.471 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.479576 | orchestrator | 14:06:30.471 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.479580 | orchestrator | 14:06:30.471 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479583 | orchestrator | 14:06:30.471 STDOUT terraform:  } 2025-08-29 14:06:30.479587 | orchestrator | 14:06:30.471 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-08-29 14:06:30.479591 | orchestrator | 14:06:30.471 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-08-29 14:06:30.479595 | orchestrator | 14:06:30.471 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.479598 | orchestrator | 14:06:30.472 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.479602 | orchestrator | 14:06:30.472 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479606 | orchestrator | 14:06:30.472 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:06:30.479610 | orchestrator | 14:06:30.472 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479613 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.479621 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.479625 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.479629 | orchestrator | 14:06:30.472 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.479632 | orchestrator | 14:06:30.472 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479636 | orchestrator | 14:06:30.472 STDOUT terraform:  } 2025-08-29 14:06:30.479640 | orchestrator | 14:06:30.472 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-08-29 14:06:30.479644 | orchestrator | 14:06:30.472 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-08-29 14:06:30.479647 | orchestrator | 14:06:30.472 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.479651 | orchestrator | 14:06:30.472 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.479655 | orchestrator | 14:06:30.472 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479659 | orchestrator | 14:06:30.472 STDOUT terraform:  + protocol = "icmp" 2025-08-29 14:06:30.479663 | orchestrator | 14:06:30.472 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479666 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.479670 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.479674 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.479677 | orchestrator | 14:06:30.472 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.479681 | orchestrator | 14:06:30.472 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479685 | orchestrator | 14:06:30.472 STDOUT terraform:  } 2025-08-29 14:06:30.479689 | 
orchestrator | 14:06:30.472 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-08-29 14:06:30.479693 | orchestrator | 14:06:30.472 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-08-29 14:06:30.479697 | orchestrator | 14:06:30.472 STDOUT terraform:  + description = "vrrp" 2025-08-29 14:06:30.479700 | orchestrator | 14:06:30.472 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:30.479707 | orchestrator | 14:06:30.472 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:30.479736 | orchestrator | 14:06:30.472 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479740 | orchestrator | 14:06:30.472 STDOUT terraform:  + protocol = "112" 2025-08-29 14:06:30.479744 | orchestrator | 14:06:30.472 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479748 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:30.479752 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:30.479755 | orchestrator | 14:06:30.472 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:30.479763 | orchestrator | 14:06:30.472 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:30.479767 | orchestrator | 14:06:30.472 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479770 | orchestrator | 14:06:30.472 STDOUT terraform:  } 2025-08-29 14:06:30.479774 | orchestrator | 14:06:30.473 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-08-29 14:06:30.479779 | orchestrator | 14:06:30.473 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-08-29 14:06:30.479782 | orchestrator | 14:06:30.473 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.479786 | orchestrator | 14:06:30.473 STDOUT terraform:  + description = "management security group" 2025-08-29 14:06:30.479790 | orchestrator | 14:06:30.473 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479794 | orchestrator | 14:06:30.473 STDOUT terraform:  + name = "testbed-management" 2025-08-29 14:06:30.479797 | orchestrator | 14:06:30.473 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479801 | orchestrator | 14:06:30.473 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 14:06:30.479805 | orchestrator | 14:06:30.473 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479808 | orchestrator | 14:06:30.473 STDOUT terraform:  } 2025-08-29 14:06:30.479812 | orchestrator | 14:06:30.473 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-08-29 14:06:30.479818 | orchestrator | 14:06:30.473 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-08-29 14:06:30.479822 | orchestrator | 14:06:30.473 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.479826 | orchestrator | 14:06:30.473 STDOUT terraform:  + description = "node security group" 2025-08-29 14:06:30.479830 | orchestrator | 14:06:30.473 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479833 | orchestrator | 14:06:30.473 STDOUT terraform:  + name = "testbed-node" 2025-08-29 14:06:30.479837 | orchestrator | 14:06:30.473 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479841 | orchestrator 
| 14:06:30.473 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 14:06:30.479844 | orchestrator | 14:06:30.473 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479848 | orchestrator | 14:06:30.473 STDOUT terraform:  } 2025-08-29 14:06:30.479852 | orchestrator | 14:06:30.473 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-08-29 14:06:30.479856 | orchestrator | 14:06:30.473 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-08-29 14:06:30.479859 | orchestrator | 14:06:30.473 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:30.479863 | orchestrator | 14:06:30.473 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-08-29 14:06:30.479867 | orchestrator | 14:06:30.473 STDOUT terraform:  + dns_nameservers = [ 2025-08-29 14:06:30.479871 | orchestrator | 14:06:30.473 STDOUT terraform:  + "8.8.8.8", 2025-08-29 14:06:30.479877 | orchestrator | 14:06:30.473 STDOUT terraform:  + "9.9.9.9", 2025-08-29 14:06:30.479881 | orchestrator | 14:06:30.473 STDOUT terraform:  ] 2025-08-29 14:06:30.479887 | orchestrator | 14:06:30.473 STDOUT terraform:  + enable_dhcp = true 2025-08-29 14:06:30.479891 | orchestrator | 14:06:30.473 STDOUT terraform:  + gateway_ip = (known after apply) 2025-08-29 14:06:30.479895 | orchestrator | 14:06:30.473 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479898 | orchestrator | 14:06:30.473 STDOUT terraform:  + ip_version = 4 2025-08-29 14:06:30.479902 | orchestrator | 14:06:30.473 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-08-29 14:06:30.479906 | orchestrator | 14:06:30.473 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-08-29 14:06:30.479909 | orchestrator | 14:06:30.473 STDOUT terraform:  + name = "subnet-testbed-management" 2025-08-29 14:06:30.479913 | orchestrator | 14:06:30.473 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:30.479917 | orchestrator | 14:06:30.473 STDOUT terraform:  + no_gateway = false 2025-08-29 14:06:30.479921 | orchestrator | 14:06:30.473 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:30.479924 | orchestrator | 14:06:30.473 STDOUT terraform:  + service_types = (known after apply) 2025-08-29 14:06:30.479928 | orchestrator | 14:06:30.473 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:30.479932 | orchestrator | 14:06:30.473 STDOUT terraform:  + allocation_pool { 2025-08-29 14:06:30.479936 | orchestrator | 14:06:30.474 STDOUT terraform:  + end = "192.168.31.250" 2025-08-29 14:06:30.479939 | orchestrator | 14:06:30.474 STDOUT terraform:  + start = "192.168.31.200" 2025-08-29 14:06:30.479943 | orchestrator | 14:06:30.474 STDOUT terraform:  } 2025-08-29 14:06:30.479947 | orchestrator | 14:06:30.474 STDOUT terraform:  } 2025-08-29 14:06:30.479951 | orchestrator | 14:06:30.474 STDOUT terraform:  # terraform_data.image will be created 2025-08-29 14:06:30.479954 | orchestrator | 14:06:30.474 STDOUT terraform:  + resource "terraform_data" "image" { 2025-08-29 14:06:30.479958 | orchestrator | 14:06:30.474 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479962 | orchestrator | 14:06:30.474 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 14:06:30.479966 | orchestrator | 14:06:30.474 STDOUT terraform:  + output = (known after apply) 2025-08-29 14:06:30.479969 | orchestrator | 14:06:30.474 STDOUT terraform:  } 2025-08-29 14:06:30.479976 | orchestrator | 14:06:30.474 STDOUT terraform:  # 
terraform_data.image_node will be created 2025-08-29 14:06:30.479980 | orchestrator | 14:06:30.474 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-08-29 14:06:30.479983 | orchestrator | 14:06:30.474 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:30.479987 | orchestrator | 14:06:30.474 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 14:06:30.479991 | orchestrator | 14:06:30.474 STDOUT terraform:  + output = (known after apply) 2025-08-29 14:06:30.479995 | orchestrator | 14:06:30.474 STDOUT terraform:  } 2025-08-29 14:06:30.479998 | orchestrator | 14:06:30.474 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-08-29 14:06:30.480005 | orchestrator | 14:06:30.474 STDOUT terraform: Changes to Outputs: 2025-08-29 14:06:30.480009 | orchestrator | 14:06:30.474 STDOUT terraform:  + manager_address = (sensitive value) 2025-08-29 14:06:30.480012 | orchestrator | 14:06:30.474 STDOUT terraform:  + private_key = (sensitive value) 2025-08-29 14:06:30.666174 | orchestrator | 14:06:30.666 STDOUT terraform: terraform_data.image_node: Creating... 2025-08-29 14:06:30.666237 | orchestrator | 14:06:30.666 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=592808c9-49b1-c370-820c-de140609ae78] 2025-08-29 14:06:30.666245 | orchestrator | 14:06:30.666 STDOUT terraform: terraform_data.image: Creating... 2025-08-29 14:06:30.666251 | orchestrator | 14:06:30.666 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=ad560690-1203-f3d6-a403-802c57288f42] 2025-08-29 14:06:30.676082 | orchestrator | 14:06:30.675 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-08-29 14:06:30.684546 | orchestrator | 14:06:30.682 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-08-29 14:06:30.684594 | orchestrator | 14:06:30.682 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-08-29 14:06:30.684860 | orchestrator | 14:06:30.684 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-08-29 14:06:30.685653 | orchestrator | 14:06:30.685 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-08-29 14:06:30.686081 | orchestrator | 14:06:30.685 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-08-29 14:06:30.686771 | orchestrator | 14:06:30.686 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-08-29 14:06:30.690829 | orchestrator | 14:06:30.690 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-08-29 14:06:30.696345 | orchestrator | 14:06:30.696 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-08-29 14:06:30.696827 | orchestrator | 14:06:30.696 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-08-29 14:06:31.197130 | orchestrator | 14:06:31.196 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-08-29 14:06:31.210749 | orchestrator | 14:06:31.210 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-08-29 14:06:31.512164 | orchestrator | 14:06:31.511 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-08-29 14:06:31.522201 | orchestrator | 14:06:31.521 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
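[Editor's note] The plan above lists the attributes of openstack_networking_subnet_v2.subnet_management (CIDR 192.168.16.0/20, DNS servers 8.8.8.8 and 9.9.9.9, DHCP enabled, allocation pool 192.168.31.200-192.168.31.250). A minimal HCL sketch of such a resource, reconstructed from the plan output only (module layout and provider configuration are assumed, not taken from the testbed source), would look roughly like:

# Sketch of the management subnet as described by the plan output above.
# Attribute values are copied from the plan; everything else is assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}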
2025-08-29 14:06:31.923233 | orchestrator | 14:06:31.923 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=69da1359-249c-43d6-a557-7bd6bd18c5ee] 2025-08-29 14:06:32.854649 | orchestrator | 14:06:31.927 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-08-29 14:06:32.854707 | orchestrator | 14:06:31.980 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-08-29 14:06:32.854720 | orchestrator | 14:06:31.990 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-08-29 14:06:32.854729 | orchestrator | 14:06:32.846 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 1s [id=ae091ffb551b54ae2b776ee4cb7f2d20808bd7ad] 2025-08-29 14:06:32.862935 | orchestrator | 14:06:32.862 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-08-29 14:06:32.867441 | orchestrator | 14:06:32.867 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=8f69110064bc7bcaea57a6ea78670c3b93ad99a9] 2025-08-29 14:06:32.871914 | orchestrator | 14:06:32.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-08-29 14:06:34.354408 | orchestrator | 14:06:34.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=e1c9c203-f31e-4d63-b484-525a06e6ccdf] 2025-08-29 14:06:34.358915 | orchestrator | 14:06:34.358 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-08-29 14:06:34.370561 | orchestrator | 14:06:34.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=5d364b5e-b4d2-47dd-94e6-90a734de67ba] 2025-08-29 14:06:34.375547 | orchestrator | 14:06:34.375 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-08-29 14:06:34.391081 | orchestrator | 14:06:34.390 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=d852edcf-4b4a-4ec3-af84-7b9722ba068a] 2025-08-29 14:06:34.396030 | orchestrator | 14:06:34.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-08-29 14:06:34.416595 | orchestrator | 14:06:34.415 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048] 2025-08-29 14:06:34.424213 | orchestrator | 14:06:34.422 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-08-29 14:06:34.426091 | orchestrator | 14:06:34.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=b4efbef4-6c99-40ea-a6ec-b5ce29198be8] 2025-08-29 14:06:34.435133 | orchestrator | 14:06:34.434 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-08-29 14:06:34.442412 | orchestrator | 14:06:34.442 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=f5a1fb11-e928-478a-853f-ace275f9637e] 2025-08-29 14:06:34.449009 | orchestrator | 14:06:34.448 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
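[Editor's note] The plan above also covers openstack_networking_router_v2.router (external network e6be7364-bfd8-4de7-8120-8f41c69a139a, availability zone hint "nova") and the router interface that attaches it to the management subnet. A hedged HCL sketch of that pair, using only values visible in the plan:

# Sketch of the router and router interface from the plan output above.
# The external network ID and AZ hint come from the plan; the references
# between resources are an assumption for illustration.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}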
2025-08-29 14:06:34.472616 | orchestrator | 14:06:34.472 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=112d60a2-5c53-4704-85f0-10fd2a98c008] 2025-08-29 14:06:34.478283 | orchestrator | 14:06:34.478 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=ad4c9021-51cf-4f71-bbdb-17cb41c45166] 2025-08-29 14:06:34.482746 | orchestrator | 14:06:34.482 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-08-29 14:06:34.720575 | orchestrator | 14:06:34.720 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=828a38b3-3187-4328-b10e-4e827af3391d] 2025-08-29 14:06:35.353372 | orchestrator | 14:06:35.353 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=75ec5a5c-c859-4e2c-8ec3-b62771c00b6e] 2025-08-29 14:06:35.360928 | orchestrator | 14:06:35.360 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-08-29 14:06:36.257790 | orchestrator | 14:06:36.257 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=ce825d14-8cf3-46c0-a5a2-f8443242ecf0] 2025-08-29 14:06:37.769220 | orchestrator | 14:06:37.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=a1c20fb4-039a-44dc-b429-5b04d60e1d33] 2025-08-29 14:06:37.785491 | orchestrator | 14:06:37.785 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6fb25367-3937-467c-ae17-e945c0f5ac09] 2025-08-29 14:06:37.813081 | orchestrator | 14:06:37.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=b2c6c1a3-b680-4669-a768-f8e5d905aa15] 2025-08-29 14:06:37.823812 | orchestrator | 14:06:37.823 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=b128c3ba-6808-4295-9988-e02b5b112f5f] 2025-08-29 14:06:37.834719 | orchestrator | 14:06:37.834 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=d7906b42-e75b-4add-b229-0ba1b6d7cbfd] 2025-08-29 14:06:37.871143 | orchestrator | 14:06:37.870 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=0fdd78d4-9cb3-420c-a5c4-e71459b792f0] 2025-08-29 14:06:38.601492 | orchestrator | 14:06:38.601 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=3b3cf072-c9f4-4bf4-90bf-867a36e0eab5] 2025-08-29 14:06:38.605563 | orchestrator | 14:06:38.605 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-08-29 14:06:38.606226 | orchestrator | 14:06:38.606 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-08-29 14:06:38.607443 | orchestrator | 14:06:38.607 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-08-29 14:06:38.823358 | orchestrator | 14:06:38.822 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=32bc8999-9354-4264-ae8d-f79439c17509] 2025-08-29 14:06:38.844481 | orchestrator | 14:06:38.844 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-08-29 14:06:38.844687 | orchestrator | 14:06:38.844 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
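[Editor's note] The apply is now creating the security groups and rules planned above. For orientation, the management security group and its SSH ingress rule (security_group_management_rule1: tcp/22 from 0.0.0.0/0) correspond to HCL roughly like the following sketch; only attributes shown in the plan are set:

# Sketch of the management security group and its SSH rule from the plan above.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}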
2025-08-29 14:06:38.847631 | orchestrator | 14:06:38.847 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-08-29 14:06:38.852192 | orchestrator | 14:06:38.851 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-08-29 14:06:38.853243 | orchestrator | 14:06:38.852 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-08-29 14:06:38.854767 | orchestrator | 14:06:38.854 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-08-29 14:06:38.855957 | orchestrator | 14:06:38.855 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-08-29 14:06:38.857217 | orchestrator | 14:06:38.857 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-08-29 14:06:38.882302 | orchestrator | 14:06:38.881 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=fbe8e4e5-720d-484b-9838-648529b10e94] 2025-08-29 14:06:38.887186 | orchestrator | 14:06:38.886 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-08-29 14:06:39.083013 | orchestrator | 14:06:39.082 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=2d6fe6a6-f148-4e32-a8a3-0ba699ed01ab] 2025-08-29 14:06:39.090674 | orchestrator | 14:06:39.090 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-08-29 14:06:39.315605 | orchestrator | 14:06:39.314 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=401fcaaf-fb30-4579-97e5-1c8ae2b13608] 2025-08-29 14:06:39.322943 | orchestrator | 14:06:39.322 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-08-29 14:06:39.533579 | orchestrator | 14:06:39.533 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=7a52b57d-afd8-4e85-9686-6389bfa5ffac] 2025-08-29 14:06:39.545431 | orchestrator | 14:06:39.545 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-08-29 14:06:39.601919 | orchestrator | 14:06:39.599 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=70680576-4fb9-424b-90d5-e4cbb344190a] 2025-08-29 14:06:39.606991 | orchestrator | 14:06:39.606 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-08-29 14:06:39.745327 | orchestrator | 14:06:39.744 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=b2368927-4666-4a63-a0bc-e05c306dd44b] 2025-08-29 14:06:39.751437 | orchestrator | 14:06:39.751 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-08-29 14:06:39.774384 | orchestrator | 14:06:39.773 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=7a01ad33-1eed-41b7-bebf-4b74cf218ee4] 2025-08-29 14:06:39.783373 | orchestrator | 14:06:39.783 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
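[Editor's note] The node_port_management ports being created here were planned near the top of this section: each port gets a fixed IP on the management subnet plus allowed address pairs for the routed/VIP ranges. A minimal HCL sketch of one such port, with values taken from the plan and the resource references assumed:

# Sketch of a node management port as listed in the plan output above.
# The plan lists two further allowed address pairs (192.168.16.8/20 and
# 192.168.16.9/20); they are omitted here for brevity.
resource "openstack_networking_port_v2" "node_port_management" {
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.14"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}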
2025-08-29 14:06:39.812579 | orchestrator | 14:06:39.812 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=63436677-c07a-445c-bd70-35bd742e0e23] 2025-08-29 14:06:39.816781 | orchestrator | 14:06:39.816 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-08-29 14:06:39.824417 | orchestrator | 14:06:39.824 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=d06fe075-fb49-4ab8-be5e-06bd19f4cb9b] 2025-08-29 14:06:39.911879 | orchestrator | 14:06:39.911 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=014628a5-6e97-47fa-9659-3aae10c4984c] 2025-08-29 14:06:40.121248 | orchestrator | 14:06:40.120 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=b9512f6a-7eb8-4c93-b6a3-537fc329a0ac] 2025-08-29 14:06:40.174339 | orchestrator | 14:06:40.173 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=a0e0e667-69db-47a2-94f6-152df566ce00] 2025-08-29 14:06:40.314123 | orchestrator | 14:06:40.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3634a675-4296-4119-8651-4bac85070b25] 2025-08-29 14:06:40.353068 | orchestrator | 14:06:40.352 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=30bdfc66-3a33-4a4c-bc77-9e0d241dcd38] 2025-08-29 14:06:40.375145 | orchestrator | 14:06:40.374 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=b0cbb513-8b72-41da-bd83-87d6a2d7c1ce] 2025-08-29 14:06:40.560677 | orchestrator | 14:06:40.560 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=40105738-7264-4533-97f2-f7b6d2be07c0] 2025-08-29 14:06:40.773381 | orchestrator | 14:06:40.772 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=4cbe06cf-8b85-4e6c-a66e-4b1dc41162b4] 2025-08-29 14:06:41.153555 | orchestrator | 14:06:41.153 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=9ad48595-7405-49da-8156-2baa77055908] 2025-08-29 14:06:41.173445 | orchestrator | 14:06:41.173 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-08-29 14:06:41.186276 | orchestrator | 14:06:41.186 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-08-29 14:06:41.189194 | orchestrator | 14:06:41.189 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-08-29 14:06:41.192989 | orchestrator | 14:06:41.192 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-08-29 14:06:41.204637 | orchestrator | 14:06:41.204 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-08-29 14:06:41.206975 | orchestrator | 14:06:41.206 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-08-29 14:06:41.209184 | orchestrator | 14:06:41.209 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
2025-08-29 14:06:42.578649 | orchestrator | 14:06:42.578 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=f432ac01-e2d5-475b-86b6-d29468140d80] 2025-08-29 14:06:42.784400 | orchestrator | 14:06:42.593 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-08-29 14:06:42.784469 | orchestrator | 14:06:42.597 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-08-29 14:06:42.784483 | orchestrator | 14:06:42.600 STDOUT terraform: local_file.inventory: Creating... 2025-08-29 14:06:42.800119 | orchestrator | 14:06:42.799 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=2619c78197d4042d83a99d0673380523c21fe188] 2025-08-29 14:06:42.801493 | orchestrator | 14:06:42.801 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=1086dd72664e4f05e22b2ce9e1334b378b8a32ac] 2025-08-29 14:06:43.484150 | orchestrator | 14:06:43.483 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f432ac01-e2d5-475b-86b6-d29468140d80] 2025-08-29 14:06:51.189562 | orchestrator | 14:06:51.189 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-08-29 14:06:51.192672 | orchestrator | 14:06:51.192 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-08-29 14:06:51.200866 | orchestrator | 14:06:51.200 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-08-29 14:06:51.205600 | orchestrator | 14:06:51.205 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-08-29 14:06:51.207682 | orchestrator | 14:06:51.207 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-08-29 14:06:51.212828 | orchestrator | 14:06:51.212 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-08-29 14:07:01.190739 | orchestrator | 14:07:01.190 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-08-29 14:07:01.192818 | orchestrator | 14:07:01.192 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-08-29 14:07:01.201981 | orchestrator | 14:07:01.201 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-08-29 14:07:01.206205 | orchestrator | 14:07:01.206 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-08-29 14:07:01.208538 | orchestrator | 14:07:01.208 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-08-29 14:07:01.213744 | orchestrator | 14:07:01.213 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-08-29 14:07:11.191032 | orchestrator | 14:07:11.190 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-08-29 14:07:11.192879 | orchestrator | 14:07:11.192 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-08-29 14:07:11.202226 | orchestrator | 14:07:11.201 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[30s elapsed] 2025-08-29 14:07:11.207311 | orchestrator | 14:07:11.207 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-08-29 14:07:11.209618 | orchestrator | 14:07:11.209 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-08-29 14:07:11.214829 | orchestrator | 14:07:11.214 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-08-29 14:07:12.282972 | orchestrator | 14:07:12.282 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=2673c674-f69c-4e44-899d-8b5757a9241f] 2025-08-29 14:07:12.431251 | orchestrator | 14:07:12.430 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=72b776d3-de3b-428d-b667-06f0ae4b53e5] 2025-08-29 14:07:21.193127 | orchestrator | 14:07:21.192 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-08-29 14:07:21.193181 | orchestrator | 14:07:21.193 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-08-29 14:07:21.210717 | orchestrator | 14:07:21.210 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-08-29 14:07:21.210760 | orchestrator | 14:07:21.210 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-08-29 14:07:21.957629 | orchestrator | 14:07:21.957 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=eb0db974-1ecf-4eca-8c0c-4ad96835726c] 2025-08-29 14:07:22.093408 | orchestrator | 14:07:22.093 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=065dbc31-e393-4935-9ecf-97ec1b396214] 2025-08-29 14:07:22.105618 | orchestrator | 14:07:22.105 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=48d0e79e-b210-49d4-b9e6-39c2a232b3a9] 2025-08-29 14:07:22.283691 | orchestrator | 14:07:22.283 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=b4ff2d79-1b50-4c39-b885-01c9fe3274ed] 2025-08-29 14:07:22.289441 | orchestrator | 14:07:22.289 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-08-29 14:07:22.318040 | orchestrator | 14:07:22.317 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4715620366342016606] 2025-08-29 14:07:22.336109 | orchestrator | 14:07:22.335 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-08-29 14:07:22.336260 | orchestrator | 14:07:22.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-08-29 14:07:22.338129 | orchestrator | 14:07:22.337 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-08-29 14:07:22.340299 | orchestrator | 14:07:22.340 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-08-29 14:07:22.355859 | orchestrator | 14:07:22.355 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-08-29 14:07:22.358440 | orchestrator | 14:07:22.358 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-08-29 14:07:22.370120 | orchestrator | 14:07:22.367 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
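[Editor's note] The node_volume_attachment resources created here pair node servers with block storage volumes; the apply log shows the resulting "<instance id>/<volume id>" attachment IDs, with three volumes per node going to node_server[3], [4] and [5]. A hedged HCL sketch of that pattern (the count and the index arithmetic are inferred from the IDs above and are illustrative only):

# Sketch of the volume attachments; mapping inferred from the attachment IDs.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}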
2025-08-29 14:07:22.380917 | orchestrator | 14:07:22.379 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-08-29 14:07:22.387907 | orchestrator | 14:07:22.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-08-29 14:07:22.401491 | orchestrator | 14:07:22.400 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-08-29 14:07:25.705701 | orchestrator | 14:07:25.705 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=2673c674-f69c-4e44-899d-8b5757a9241f/112d60a2-5c53-4704-85f0-10fd2a98c008] 2025-08-29 14:07:25.715880 | orchestrator | 14:07:25.715 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=72b776d3-de3b-428d-b667-06f0ae4b53e5/d852edcf-4b4a-4ec3-af84-7b9722ba068a] 2025-08-29 14:07:25.734558 | orchestrator | 14:07:25.734 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=2673c674-f69c-4e44-899d-8b5757a9241f/aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048] 2025-08-29 14:07:25.750061 | orchestrator | 14:07:25.749 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=72b776d3-de3b-428d-b667-06f0ae4b53e5/e1c9c203-f31e-4d63-b484-525a06e6ccdf] 2025-08-29 14:07:25.778299 | orchestrator | 14:07:25.776 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=065dbc31-e393-4935-9ecf-97ec1b396214/5d364b5e-b4d2-47dd-94e6-90a734de67ba] 2025-08-29 14:07:25.808702 | orchestrator | 14:07:25.808 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=065dbc31-e393-4935-9ecf-97ec1b396214/f5a1fb11-e928-478a-853f-ace275f9637e] 2025-08-29 14:07:31.860184 | orchestrator | 14:07:31.859 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=2673c674-f69c-4e44-899d-8b5757a9241f/b4efbef4-6c99-40ea-a6ec-b5ce29198be8] 2025-08-29 14:07:31.937327 | orchestrator | 14:07:31.936 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=72b776d3-de3b-428d-b667-06f0ae4b53e5/ad4c9021-51cf-4f71-bbdb-17cb41c45166] 2025-08-29 14:07:31.946821 | orchestrator | 14:07:31.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=065dbc31-e393-4935-9ecf-97ec1b396214/828a38b3-3187-4328-b10e-4e827af3391d] 2025-08-29 14:07:32.382752 | orchestrator | 14:07:32.382 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-08-29 14:07:42.383772 | orchestrator | 14:07:42.383 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-08-29 14:07:42.810579 | orchestrator | 14:07:42.810 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=fed63c9e-5716-4de0-80dd-5ce167386298] 2025-08-29 14:07:42.840137 | orchestrator | 14:07:42.839 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
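[Editor's note] The plan declared two sensitive outputs (manager_address and private_key), which is why the plan showed "(sensitive value)" and the "Outputs:" block that follows prints them without values. A sketch of such output declarations; the value expressions are assumptions, not the testbed's actual source:

# Sketch of the sensitive outputs reported below.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}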
2025-08-29 14:07:42.840219 | orchestrator | 14:07:42.839 STDOUT terraform: Outputs: 2025-08-29 14:07:42.840244 | orchestrator | 14:07:42.840 STDOUT terraform: manager_address = 2025-08-29 14:07:42.840257 | orchestrator | 14:07:42.840 STDOUT terraform: private_key = 2025-08-29 14:07:42.925760 | orchestrator | ok: Runtime: 0:01:17.987675 2025-08-29 14:07:42.963121 | 2025-08-29 14:07:42.963302 | TASK [Fetch manager address] 2025-08-29 14:07:43.403086 | orchestrator | ok 2025-08-29 14:07:43.414359 | 2025-08-29 14:07:43.414518 | TASK [Set manager_host address] 2025-08-29 14:07:43.496090 | orchestrator | ok 2025-08-29 14:07:43.505415 | 2025-08-29 14:07:43.505597 | LOOP [Update ansible collections] 2025-08-29 14:07:48.122525 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:07:48.122929 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:07:48.122983 | orchestrator | Starting galaxy collection install process 2025-08-29 14:07:48.123016 | orchestrator | Process install dependency map 2025-08-29 14:07:48.123043 | orchestrator | Starting collection install process 2025-08-29 14:07:48.123068 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-08-29 14:07:48.123098 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-08-29 14:07:48.123129 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 14:07:48.123191 | orchestrator | ok: Item: commons Runtime: 0:00:04.316664 2025-08-29 14:07:50.316342 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:07:50.316528 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:07:50.316584 | orchestrator | Starting galaxy collection install process 2025-08-29 14:07:50.316625 | orchestrator | Process install dependency map 2025-08-29 14:07:50.316663 | orchestrator | Starting collection install process 2025-08-29 14:07:50.316698 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-08-29 14:07:50.316749 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-08-29 14:07:50.316786 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 14:07:50.316840 | orchestrator | ok: Item: services Runtime: 0:00:01.961600 2025-08-29 14:07:50.337549 | 2025-08-29 14:07:50.337702 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:08:00.883309 | orchestrator | ok 2025-08-29 14:08:00.893857 | 2025-08-29 14:08:00.893984 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:09:00.947299 | orchestrator | ok 2025-08-29 14:09:00.957683 | 2025-08-29 14:09:00.957813 | TASK [Fetch manager ssh hostkey] 2025-08-29 14:09:02.537814 | orchestrator | Output suppressed because no_log was given 2025-08-29 14:09:02.554715 | 2025-08-29 14:09:02.554930 | TASK [Get ssh keypair from terraform environment] 2025-08-29 14:09:03.122690 | orchestrator | ok: Runtime: 0:00:00.010242 2025-08-29 14:09:03.139444 | 2025-08-29 14:09:03.139611 | TASK [Point out that the following task takes some time and does not give any output] 
2025-08-29 14:09:03.187518 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 14:09:03.197506 | 2025-08-29 14:09:03.197628 | TASK [Run manager part 0] 2025-08-29 14:09:05.287797 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:09:05.403475 | orchestrator | 2025-08-29 14:09:05.403557 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 14:09:05.403567 | orchestrator | 2025-08-29 14:09:05.403589 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 14:09:08.852133 | orchestrator | ok: [testbed-manager] 2025-08-29 14:09:08.852197 | orchestrator | 2025-08-29 14:09:08.852230 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:09:08.852245 | orchestrator | 2025-08-29 14:09:08.852259 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:09:10.995981 | orchestrator | ok: [testbed-manager] 2025-08-29 14:09:10.996091 | orchestrator | 2025-08-29 14:09:10.996101 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:09:11.678634 | orchestrator | ok: [testbed-manager] 2025-08-29 14:09:11.678727 | orchestrator | 2025-08-29 14:09:11.678746 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 14:09:11.729193 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.729248 | orchestrator | 2025-08-29 14:09:11.729258 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 14:09:11.764892 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.764994 | orchestrator | 2025-08-29 14:09:11.765002 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:09:11.801109 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.801167 | orchestrator | 2025-08-29 14:09:11.801173 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:09:11.831437 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.832145 | orchestrator | 2025-08-29 14:09:11.832160 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:09:11.869935 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.869987 | orchestrator | 2025-08-29 14:09:11.869994 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 14:09:11.907987 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.908158 | orchestrator | 2025-08-29 14:09:11.908198 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 14:09:11.946355 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:11.946409 | orchestrator | 2025-08-29 14:09:11.946417 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 14:09:12.876766 | orchestrator | changed: [testbed-manager] 2025-08-29 14:09:12.876839 | orchestrator | 2025-08-29 14:09:12.876846 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-08-29 14:12:08.546842 | orchestrator | changed: [testbed-manager] 2025-08-29 14:12:08.546932 | orchestrator | 2025-08-29 14:12:08.546951 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 14:13:24.378939 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:24.379004 | orchestrator | 2025-08-29 14:13:24.379017 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:13:44.237330 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:44.237404 | orchestrator | 2025-08-29 14:13:44.237450 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:13:52.411042 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:52.412532 | orchestrator | 2025-08-29 14:13:52.412562 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:13:52.458187 | orchestrator | ok: [testbed-manager] 2025-08-29 14:13:52.458235 | orchestrator | 2025-08-29 14:13:52.458248 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 14:13:53.257201 | orchestrator | ok: [testbed-manager] 2025-08-29 14:13:53.257255 | orchestrator | 2025-08-29 14:13:53.257265 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 14:13:54.078075 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:54.078157 | orchestrator | 2025-08-29 14:13:54.078172 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 14:14:00.159718 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:00.159786 | orchestrator | 2025-08-29 14:14:00.159831 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 14:14:05.965256 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:05.965346 | orchestrator | 2025-08-29 14:14:05.965364 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 14:14:08.694194 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:08.694283 | orchestrator | 2025-08-29 14:14:08.694299 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 14:14:10.399708 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:10.399786 | orchestrator | 2025-08-29 14:14:10.399801 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 14:14:11.492055 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:14:11.492356 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:14:11.492372 | orchestrator | 2025-08-29 14:14:11.492385 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 14:14:11.538296 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:14:11.538368 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:14:11.538381 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:14:11.538415 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 14:14:21.705513 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:14:21.705598 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:14:21.705613 | orchestrator | 2025-08-29 14:14:21.705626 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 14:14:22.277512 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:22.278968 | orchestrator | 2025-08-29 14:14:22.279003 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 14:15:44.225887 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 14:15:44.225959 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 14:15:44.225976 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 14:15:44.225988 | orchestrator | 2025-08-29 14:15:44.226001 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 14:15:46.472736 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-08-29 14:15:46.472802 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 14:15:46.472815 | orchestrator | 2025-08-29 14:15:46.472827 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 14:15:46.472839 | orchestrator | 2025-08-29 14:15:46.472851 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:15:47.827060 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:47.827147 | orchestrator | 2025-08-29 14:15:47.827169 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:15:47.879478 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:47.879559 | orchestrator | 2025-08-29 14:15:47.879572 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:15:47.962348 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:47.962439 | orchestrator | 2025-08-29 14:15:47.962456 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:15:48.717745 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:48.717829 | orchestrator | 2025-08-29 14:15:48.717845 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:15:49.428631 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:49.428722 | orchestrator | 2025-08-29 14:15:49.428739 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:15:51.120361 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 14:15:51.120405 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 14:15:51.120413 | orchestrator | 2025-08-29 14:15:51.120429 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:15:52.517705 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:52.517897 | orchestrator | 2025-08-29 14:15:52.517916 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:15:54.296770 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 
14:15:54.296835 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 14:15:54.296847 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:15:54.296858 | orchestrator | 2025-08-29 14:15:54.296870 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 14:15:54.364490 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:54.364549 | orchestrator | 2025-08-29 14:15:54.364559 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:15:54.944190 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:54.944255 | orchestrator | 2025-08-29 14:15:54.944295 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:15:55.014917 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:55.014986 | orchestrator | 2025-08-29 14:15:55.015002 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:15:55.889498 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:15:55.889559 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:55.889572 | orchestrator | 2025-08-29 14:15:55.889583 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:15:55.932102 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:55.932174 | orchestrator | 2025-08-29 14:15:55.932188 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:15:55.964652 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:55.964726 | orchestrator | 2025-08-29 14:15:55.964740 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:15:55.997359 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:55.997439 | orchestrator | 2025-08-29 14:15:55.997454 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:15:56.063601 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:56.063679 | orchestrator | 2025-08-29 14:15:56.063696 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:15:56.797171 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:56.797254 | orchestrator | 2025-08-29 14:15:56.797294 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:15:56.797306 | orchestrator | 2025-08-29 14:15:56.797318 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:15:58.198282 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:58.198375 | orchestrator | 2025-08-29 14:15:58.198391 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 14:15:59.167810 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:59.167893 | orchestrator | 2025-08-29 14:15:59.167910 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:15:59.167925 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 14:15:59.167938 | orchestrator | 2025-08-29 14:15:59.487289 | orchestrator | ok: Runtime: 0:06:55.702242 2025-08-29 14:15:59.505770 | 
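As the "Point out ..." message earlier in this log explains, a "Run manager part N" task simply executes an Ansible playbook against the manager and streams no per-task output back into this console. A minimal sketch of such an invocation, modelled on the manager-part-3 call traced later in this log; the inventory and vault-password arguments mirror that trace, while manager-part-0.yml is an assumed filename:

    # Sketch only: roughly how a "Run manager part N" step invokes Ansible
    # against the manager. manager-part-0.yml is an assumption; the other
    # arguments follow the manager-part-3 trace further below.
    ansible-playbook -i testbed-manager, \
        --vault-password-file /opt/configuration/environments/.vault_pass \
        /opt/configuration/ansible/manager-part-0.yml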
2025-08-29 14:15:59.505914 | TASK [Point out that the log in on the manager is now possible] 2025-08-29 14:15:59.541742 | orchestrator | ok: It is already possible to log in to the manager with 'make login'. 2025-08-29 14:15:59.549318 | 2025-08-29 14:15:59.549446 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:15:59.589935 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 14:15:59.597302 | 2025-08-29 14:15:59.597462 | TASK [Run manager part 1 + 2] 2025-08-29 14:16:00.891540 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:16:00.963730 | orchestrator | 2025-08-29 14:16:00.963818 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 14:16:00.963830 | orchestrator | 2025-08-29 14:16:00.963851 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:16:03.923197 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:03.923295 | orchestrator | 2025-08-29 14:16:03.923347 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:16:03.964299 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:16:03.964366 | orchestrator | 2025-08-29 14:16:03.964385 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:16:04.004179 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:04.004292 | orchestrator | 2025-08-29 14:16:04.004320 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:16:04.040823 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:04.040896 | orchestrator | 2025-08-29 14:16:04.040912 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:16:04.104526 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:04.104601 | orchestrator | 2025-08-29 14:16:04.104620 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:16:04.161581 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:04.161620 | orchestrator | 2025-08-29 14:16:04.161628 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:16:04.208760 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 14:16:04.208790 | orchestrator | 2025-08-29 14:16:04.208796 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:16:04.858202 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:04.858242 | orchestrator | 2025-08-29 14:16:04.858262 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:16:04.908953 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:16:04.909007 | orchestrator | 2025-08-29 14:16:04.909018 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:16:06.142010 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:06.142162 | orchestrator | 2025-08-29 14:16:06.142172 | orchestrator | TASK
[osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:16:06.665979 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:06.666038 | orchestrator | 2025-08-29 14:16:06.666047 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:16:07.717509 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:07.717549 | orchestrator | 2025-08-29 14:16:07.717557 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:16:23.967466 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:23.967508 | orchestrator | 2025-08-29 14:16:23.967514 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:16:24.598274 | orchestrator | ok: [testbed-manager] 2025-08-29 14:16:24.598353 | orchestrator | 2025-08-29 14:16:24.598372 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 14:16:24.655003 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:16:24.655047 | orchestrator | 2025-08-29 14:16:24.655056 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 14:16:25.513630 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:25.513685 | orchestrator | 2025-08-29 14:16:25.513699 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 14:16:26.410194 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:26.410289 | orchestrator | 2025-08-29 14:16:26.410304 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 14:16:26.932666 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:26.932737 | orchestrator | 2025-08-29 14:16:26.932752 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 14:16:26.972188 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:16:26.972300 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:16:26.972315 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:16:26.972327 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 14:16:32.692878 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:32.692928 | orchestrator | 2025-08-29 14:16:32.692936 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 14:16:41.713650 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 14:16:41.713743 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 14:16:41.713759 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 14:16:41.713771 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 14:16:41.713790 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 14:16:41.713801 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 14:16:41.713813 | orchestrator | 2025-08-29 14:16:41.713825 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 14:16:42.801839 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:42.801931 | orchestrator | 2025-08-29 14:16:42.801948 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 14:16:42.847479 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:16:42.847569 | orchestrator | 2025-08-29 14:16:42.847584 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 14:16:46.028825 | orchestrator | changed: [testbed-manager] 2025-08-29 14:16:46.028887 | orchestrator | 2025-08-29 14:16:46.028897 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 14:16:46.071558 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:16:46.071608 | orchestrator | 2025-08-29 14:16:46.071617 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 14:18:19.424701 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:19.424789 | orchestrator | 2025-08-29 14:18:19.424807 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 14:18:20.527714 | orchestrator | ok: [testbed-manager] 2025-08-29 14:18:20.527754 | orchestrator | 2025-08-29 14:18:20.527762 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:18:20.527770 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 14:18:20.527776 | orchestrator | 2025-08-29 14:18:20.727783 | orchestrator | ok: Runtime: 0:02:20.730512 2025-08-29 14:18:20.744358 | 2025-08-29 14:18:20.744512 | TASK [Reboot manager] 2025-08-29 14:18:22.286513 | orchestrator | ok: Runtime: 0:00:00.956257 2025-08-29 14:18:22.304195 | 2025-08-29 14:18:22.304399 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:18:36.926218 | orchestrator | ok 2025-08-29 14:18:36.936514 | 2025-08-29 14:18:36.936635 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:19:36.980496 | orchestrator | ok 2025-08-29 14:19:36.989693 | 2025-08-29 14:19:36.989825 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 14:19:39.448703 | orchestrator | 2025-08-29 14:19:39.519580 | orchestrator | # DEPLOY MANAGER 2025-08-29 14:19:39.519654 | orchestrator | 2025-08-29 14:19:39.519669 | orchestrator | + set -e 2025-08-29 14:19:39.519683 | orchestrator | + echo 2025-08-29 14:19:39.519697 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-08-29 14:19:39.519722 | orchestrator | + echo 2025-08-29 14:19:39.519771 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 14:19:39.519812 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 14:19:39.519827 | orchestrator | 2025-08-29 14:19:39.519839 | orchestrator | export CEPH_VERSION=reef 2025-08-29 14:19:39.519852 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 14:19:39.519864 | orchestrator | export MANAGER_VERSION=9.2.0 2025-08-29 14:19:39.519885 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 14:19:39.519896 | orchestrator | 2025-08-29 14:19:39.519914 | orchestrator | export ARA=false 2025-08-29 14:19:39.519925 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 14:19:39.519941 | orchestrator | export TEMPEST=false 2025-08-29 14:19:39.519953 | orchestrator | export IS_ZUUL=true 2025-08-29 14:19:39.519964 | orchestrator | 2025-08-29 14:19:39.519981 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:19:39.519992 | orchestrator | export EXTERNAL_API=false 2025-08-29 14:19:39.520003 | orchestrator | 2025-08-29 14:19:39.520013 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 14:19:39.520026 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 14:19:39.520037 | orchestrator | 2025-08-29 14:19:39.520048 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 14:19:39.520058 | orchestrator | 2025-08-29 14:19:39.520160 | orchestrator | + echo 2025-08-29 14:19:39.520174 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 14:19:39.520185 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 14:19:39.520195 | orchestrator | ++ INTERACTIVE=false 2025-08-29 14:19:39.520233 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 14:19:39.520245 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 14:19:39.520257 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 14:19:39.520267 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 14:19:39.520278 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 14:19:39.520288 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 14:19:39.520298 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 14:19:39.520309 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 14:19:39.520320 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 14:19:39.520330 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 14:19:39.520341 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 14:19:39.520351 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 14:19:39.520373 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 14:19:39.520384 | orchestrator | ++ export ARA=false 2025-08-29 14:19:39.520395 | orchestrator | ++ ARA=false 2025-08-29 14:19:39.520406 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 14:19:39.520416 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 14:19:39.520427 | orchestrator | ++ export TEMPEST=false 2025-08-29 14:19:39.520437 | orchestrator | ++ TEMPEST=false 2025-08-29 14:19:39.520448 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 14:19:39.520458 | orchestrator | ++ IS_ZUUL=true 2025-08-29 14:19:39.520469 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:19:39.520480 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:19:39.520490 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 14:19:39.520500 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 14:19:39.520511 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 
14:19:39.520521 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 14:19:39.520533 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 14:19:39.520543 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 14:19:39.520554 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 14:19:39.520564 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 14:19:39.520575 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-08-29 14:19:39.520587 | orchestrator | + docker version 2025-08-29 14:19:39.748434 | orchestrator | Client: Docker Engine - Community 2025-08-29 14:19:39.748504 | orchestrator | Version: 27.5.1 2025-08-29 14:19:39.748513 | orchestrator | API version: 1.47 2025-08-29 14:19:39.748519 | orchestrator | Go version: go1.22.11 2025-08-29 14:19:39.748525 | orchestrator | Git commit: 9f9e405 2025-08-29 14:19:39.748530 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-08-29 14:19:39.748537 | orchestrator | OS/Arch: linux/amd64 2025-08-29 14:19:39.748542 | orchestrator | Context: default 2025-08-29 14:19:39.748548 | orchestrator | 2025-08-29 14:19:39.748554 | orchestrator | Server: Docker Engine - Community 2025-08-29 14:19:39.748559 | orchestrator | Engine: 2025-08-29 14:19:39.748566 | orchestrator | Version: 27.5.1 2025-08-29 14:19:39.748571 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-08-29 14:19:39.748597 | orchestrator | Go version: go1.22.11 2025-08-29 14:19:39.748603 | orchestrator | Git commit: 4c9b3b0 2025-08-29 14:19:39.748608 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-08-29 14:19:39.748614 | orchestrator | OS/Arch: linux/amd64 2025-08-29 14:19:39.748619 | orchestrator | Experimental: false 2025-08-29 14:19:39.748625 | orchestrator | containerd: 2025-08-29 14:19:39.748631 | orchestrator | Version: 1.7.27 2025-08-29 14:19:39.748636 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-08-29 14:19:39.748642 | orchestrator | runc: 2025-08-29 14:19:39.748647 | orchestrator | Version: 1.2.5 2025-08-29 14:19:39.748653 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-08-29 14:19:39.748658 | orchestrator | docker-init: 2025-08-29 14:19:39.748670 | orchestrator | Version: 0.19.0 2025-08-29 14:19:39.748676 | orchestrator | GitCommit: de40ad0 2025-08-29 14:19:39.751305 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-08-29 14:19:39.760740 | orchestrator | + set -e 2025-08-29 14:19:39.760770 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 14:19:39.760776 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 14:19:39.760790 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 14:19:39.760796 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 14:19:39.760801 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 14:19:39.760807 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 14:19:39.760813 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 14:19:39.760819 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 14:19:39.760824 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 14:19:39.760830 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 14:19:39.760835 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 14:19:39.760840 | orchestrator | ++ export ARA=false 2025-08-29 14:19:39.760846 | orchestrator | ++ ARA=false 2025-08-29 14:19:39.760852 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 14:19:39.760857 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 14:19:39.760863 | orchestrator | ++ 
export TEMPEST=false 2025-08-29 14:19:39.760868 | orchestrator | ++ TEMPEST=false 2025-08-29 14:19:39.760878 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 14:19:39.760884 | orchestrator | ++ IS_ZUUL=true 2025-08-29 14:19:39.760889 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:19:39.760894 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:19:39.760900 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 14:19:39.760905 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 14:19:39.760911 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 14:19:39.760916 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 14:19:39.760921 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 14:19:39.760926 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 14:19:39.760932 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 14:19:39.760937 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 14:19:39.760942 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 14:19:39.760948 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 14:19:39.760953 | orchestrator | ++ INTERACTIVE=false 2025-08-29 14:19:39.760958 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 14:19:39.760980 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 14:19:39.761093 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 14:19:39.761102 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0 2025-08-29 14:19:39.768362 | orchestrator | + set -e 2025-08-29 14:19:39.768377 | orchestrator | + VERSION=9.2.0 2025-08-29 14:19:39.768386 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml 2025-08-29 14:19:39.775358 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 14:19:39.775387 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-08-29 14:19:39.778844 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-08-29 14:19:39.782459 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-08-29 14:19:39.791168 | orchestrator | /opt/configuration ~ 2025-08-29 14:19:39.791196 | orchestrator | + set -e 2025-08-29 14:19:39.791209 | orchestrator | + pushd /opt/configuration 2025-08-29 14:19:39.791221 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 14:19:39.792715 | orchestrator | + source /opt/venv/bin/activate 2025-08-29 14:19:39.794231 | orchestrator | ++ deactivate nondestructive 2025-08-29 14:19:39.794258 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:39.794270 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:39.794312 | orchestrator | ++ hash -r 2025-08-29 14:19:39.794324 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:39.794334 | orchestrator | ++ unset VIRTUAL_ENV 2025-08-29 14:19:39.794345 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-08-29 14:19:39.794355 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-08-29 14:19:39.794366 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-08-29 14:19:39.794376 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-08-29 14:19:39.794410 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-08-29 14:19:39.794423 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-08-29 14:19:39.794440 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:19:39.794451 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:19:39.794462 | orchestrator | ++ export PATH 2025-08-29 14:19:39.794473 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:39.794484 | orchestrator | ++ '[' -z '' ']' 2025-08-29 14:19:39.794494 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-08-29 14:19:39.794504 | orchestrator | ++ PS1='(venv) ' 2025-08-29 14:19:39.794515 | orchestrator | ++ export PS1 2025-08-29 14:19:39.794526 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-08-29 14:19:39.794536 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-08-29 14:19:39.794547 | orchestrator | ++ hash -r 2025-08-29 14:19:39.794561 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-08-29 14:19:40.763965 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-08-29 14:19:40.764543 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2025-08-29 14:19:40.765974 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-08-29 14:19:40.767141 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-08-29 14:19:40.768204 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-08-29 14:19:40.778268 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-08-29 14:19:40.779828 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-08-29 14:19:40.780868 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2025-08-29 14:19:40.782152 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-08-29 14:19:40.812389 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3) 2025-08-29 14:19:40.813954 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-08-29 14:19:40.815574 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-08-29 14:19:40.816840 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3) 2025-08-29 14:19:40.820869 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-08-29 14:19:41.018534 | orchestrator | ++ which gilt 2025-08-29 14:19:41.022231 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-08-29 14:19:41.022266 | orchestrator | + /opt/venv/bin/gilt overlay 2025-08-29 14:19:41.231803 | orchestrator | osism.cfg-generics: 2025-08-29 14:19:41.387416 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-08-29 14:19:41.387513 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-08-29 14:19:41.387539 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-08-29 14:19:41.387552 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-08-29 14:19:41.974489 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-08-29 14:19:41.987297 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-08-29 14:19:42.309769 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-08-29 14:19:42.372389 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 14:19:42.372478 | orchestrator | + deactivate 2025-08-29 14:19:42.372493 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 14:19:42.372505 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:19:42.372516 | orchestrator | + export PATH 2025-08-29 14:19:42.372528 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 14:19:42.372540 | orchestrator | + '[' -n '' ']' 2025-08-29 14:19:42.372552 | orchestrator | + hash -r 2025-08-29 14:19:42.372563 | orchestrator | + '[' -n '' ']' 2025-08-29 14:19:42.372573 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 14:19:42.372584 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 14:19:42.372595 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-08-29 14:19:42.372605 | orchestrator | + unset -f deactivate 2025-08-29 14:19:42.372616 | orchestrator | + popd 2025-08-29 14:19:42.372627 | orchestrator | ~ 2025-08-29 14:19:42.374907 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-08-29 14:19:42.374931 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-08-29 14:19:42.375560 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 14:19:42.437518 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 14:19:42.437603 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-08-29 14:19:42.437617 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-08-29 14:19:42.533544 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 14:19:42.533732 | orchestrator | + source /opt/venv/bin/activate 2025-08-29 14:19:42.533762 | orchestrator | ++ deactivate nondestructive 2025-08-29 14:19:42.533774 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:42.533801 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:42.533887 | orchestrator | ++ hash -r 2025-08-29 14:19:42.534162 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:42.534219 | orchestrator | ++ unset VIRTUAL_ENV 2025-08-29 14:19:42.534232 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-08-29 14:19:42.534260 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-08-29 14:19:42.534335 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-08-29 14:19:42.534439 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-08-29 14:19:42.535596 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-08-29 14:19:42.535615 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-08-29 14:19:42.535628 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:19:42.535641 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:19:42.535672 | orchestrator | ++ export PATH 2025-08-29 14:19:42.535684 | orchestrator | ++ '[' -n '' ']' 2025-08-29 14:19:42.535695 | orchestrator | ++ '[' -z '' ']' 2025-08-29 14:19:42.535705 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-08-29 14:19:42.535716 | orchestrator | ++ PS1='(venv) ' 2025-08-29 14:19:42.535727 | orchestrator | ++ export PS1 2025-08-29 14:19:42.535738 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-08-29 14:19:42.535749 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-08-29 14:19:42.535760 | orchestrator | ++ hash -r 2025-08-29 14:19:42.535771 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-08-29 14:19:43.649316 | orchestrator | 2025-08-29 14:19:43.649444 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-08-29 14:19:43.649462 | orchestrator | 2025-08-29 14:19:43.649477 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 14:19:44.226934 | orchestrator | ok: [testbed-manager] 2025-08-29 14:19:44.227032 | orchestrator | 2025-08-29 14:19:44.227049 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-08-29 14:19:45.241626 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:45.241722 | orchestrator | 2025-08-29 14:19:45.241737 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-08-29 14:19:45.241750 | orchestrator | 2025-08-29 14:19:45.241761 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:19:48.429882 | orchestrator | ok: [testbed-manager] 2025-08-29 14:19:48.430005 | orchestrator | 2025-08-29 14:19:48.430114 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-08-29 14:19:48.477932 | orchestrator | ok: [testbed-manager] 2025-08-29 14:19:48.478103 | orchestrator | 2025-08-29 14:19:48.478123 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-08-29 14:19:48.908481 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:48.908584 | orchestrator | 2025-08-29 14:19:48.908602 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-08-29 14:19:48.930844 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:19:48.930921 | orchestrator | 2025-08-29 14:19:48.930928 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 14:19:49.252321 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:49.252435 | orchestrator | 2025-08-29 14:19:49.252448 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-08-29 14:19:49.307480 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:19:49.307580 | orchestrator | 2025-08-29 14:19:49.307595 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-08-29 14:19:49.625349 | orchestrator | ok: [testbed-manager] 2025-08-29 14:19:49.625449 | orchestrator | 2025-08-29 14:19:49.625465 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-08-29 14:19:49.729609 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:19:49.729685 | orchestrator | 2025-08-29 14:19:49.729698 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-08-29 14:19:49.729710 | orchestrator | 2025-08-29 14:19:49.729721 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:19:51.444819 | orchestrator | ok: [testbed-manager] 2025-08-29 14:19:51.444919 | orchestrator | 2025-08-29 14:19:51.444933 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-08-29 14:19:51.560537 | orchestrator | included: osism.services.traefik for testbed-manager 2025-08-29 14:19:51.560621 | orchestrator | 2025-08-29 14:19:51.560632 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-08-29 14:19:51.621486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-08-29 14:19:51.621559 | orchestrator | 2025-08-29 14:19:51.621571 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-08-29 14:19:52.741106 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-08-29 14:19:52.741188 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-08-29 14:19:52.741199 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-08-29 14:19:52.741206 | orchestrator | 2025-08-29 14:19:52.741214 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-08-29 14:19:54.600474 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-08-29 14:19:54.600571 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-08-29 14:19:54.600585 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-08-29 14:19:54.600598 | orchestrator | 2025-08-29 14:19:54.600611 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-08-29 14:19:55.247273 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:19:55.247364 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:55.247380 | orchestrator | 2025-08-29 14:19:55.247393 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-08-29 14:19:55.909128 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:19:55.909220 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:55.909236 | orchestrator | 2025-08-29 14:19:55.909248 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-08-29 14:19:55.969949 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:19:55.970083 | orchestrator | 2025-08-29 14:19:55.970100 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-08-29 14:19:56.363982 | orchestrator | ok: [testbed-manager] 2025-08-29 14:19:56.364123 | orchestrator | 2025-08-29 14:19:56.364152 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-08-29 14:19:56.440492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-08-29 14:19:56.440567 | orchestrator | 2025-08-29 14:19:56.440578 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-08-29 14:19:57.522221 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:57.522332 | orchestrator | 2025-08-29 14:19:57.522349 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-08-29 14:19:58.292853 | orchestrator | changed: [testbed-manager] 2025-08-29 14:19:58.293743 | orchestrator | 2025-08-29 14:19:58.293776 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-08-29 14:20:08.755068 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:08.755139 | orchestrator | 2025-08-29 14:20:08.755157 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-08-29 14:20:08.804866 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:20:08.804926 | orchestrator | 2025-08-29 14:20:08.804933 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-08-29 14:20:08.804938 | orchestrator | 2025-08-29 14:20:08.804942 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:20:10.257259 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:10.257462 | orchestrator | 2025-08-29 14:20:10.257473 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-08-29 14:20:10.381867 | orchestrator | included: osism.services.manager for testbed-manager 2025-08-29 14:20:10.381971 | orchestrator | 2025-08-29 14:20:10.381984 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-08-29 14:20:10.449307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:20:10.449398 | orchestrator | 2025-08-29 14:20:10.449406 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-08-29 14:20:12.804693 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:12.804761 | orchestrator | 2025-08-29 14:20:12.804768 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-08-29 14:20:12.845658 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:12.845712 | orchestrator | 2025-08-29 14:20:12.845718 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-08-29 14:20:12.962743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-08-29 14:20:12.962807 | orchestrator | 2025-08-29 14:20:12.962813 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-08-29 14:20:15.527723 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-08-29 14:20:15.527793 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-08-29 14:20:15.527799 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-08-29 14:20:15.527804 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-08-29 14:20:15.527808 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-08-29 14:20:15.527813 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-08-29 14:20:15.527817 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-08-29 14:20:15.527820 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-08-29 14:20:15.527825 | orchestrator | 2025-08-29 14:20:15.527831 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-08-29 14:20:16.141304 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:16.141367 | orchestrator | 2025-08-29 14:20:16.141373 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-08-29 14:20:16.707696 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:16.707761 | orchestrator | 2025-08-29 14:20:16.707767 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-08-29 14:20:16.780098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-08-29 14:20:16.780144 | orchestrator | 2025-08-29 14:20:16.780149 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-08-29 14:20:17.898277 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-08-29 14:20:17.898323 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-08-29 14:20:17.898328 | orchestrator | 2025-08-29 14:20:17.898332 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-08-29 14:20:18.459528 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:18.459575 | orchestrator | 2025-08-29 14:20:18.459579 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-08-29 14:20:18.506976 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:20:18.506994 | orchestrator | 2025-08-29 14:20:18.506998 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-08-29 14:20:18.556484 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:20:18.556516 | orchestrator | 2025-08-29 14:20:18.556520 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-08-29 14:20:18.603334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-08-29 14:20:18.603361 | orchestrator | 2025-08-29 14:20:18.603365 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-08-29 14:20:19.914281 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:20:19.914385 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:20:19.914398 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:19.914409 | orchestrator | 2025-08-29 14:20:19.914420 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-08-29 14:20:20.524402 | orchestrator | changed: [testbed-manager] 2025-08-29 
14:20:20.524505 | orchestrator | 2025-08-29 14:20:20.524519 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-08-29 14:20:20.574532 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:20:20.574621 | orchestrator | 2025-08-29 14:20:20.574638 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-08-29 14:20:20.661577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-08-29 14:20:20.661659 | orchestrator | 2025-08-29 14:20:20.661675 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-08-29 14:20:21.183786 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:21.183913 | orchestrator | 2025-08-29 14:20:21.183929 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-08-29 14:20:21.576602 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:21.576693 | orchestrator | 2025-08-29 14:20:21.576707 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-08-29 14:20:22.803192 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-08-29 14:20:22.803298 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-08-29 14:20:22.803312 | orchestrator | 2025-08-29 14:20:22.803327 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-08-29 14:20:23.455945 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:23.456041 | orchestrator | 2025-08-29 14:20:23.456089 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-08-29 14:20:23.850996 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:23.851121 | orchestrator | 2025-08-29 14:20:23.851136 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-08-29 14:20:24.198310 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:24.198428 | orchestrator | 2025-08-29 14:20:24.198453 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-08-29 14:20:24.241237 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:20:24.241325 | orchestrator | 2025-08-29 14:20:24.241338 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-08-29 14:20:24.304469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-08-29 14:20:24.304566 | orchestrator | 2025-08-29 14:20:24.304580 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-08-29 14:20:24.348612 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:24.348731 | orchestrator | 2025-08-29 14:20:24.348745 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-08-29 14:20:26.394808 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-08-29 14:20:26.394967 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-08-29 14:20:26.394993 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-08-29 14:20:26.395117 | orchestrator | 2025-08-29 14:20:26.395143 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] 
********************* 2025-08-29 14:20:27.083381 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:27.083489 | orchestrator | 2025-08-29 14:20:27.083505 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-08-29 14:20:27.792086 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:27.792191 | orchestrator | 2025-08-29 14:20:27.792205 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-08-29 14:20:28.487769 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:28.487889 | orchestrator | 2025-08-29 14:20:28.487915 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-08-29 14:20:28.551350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-08-29 14:20:28.551460 | orchestrator | 2025-08-29 14:20:28.551482 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-08-29 14:20:28.600002 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:28.600135 | orchestrator | 2025-08-29 14:20:28.600149 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-08-29 14:20:29.288542 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-08-29 14:20:29.288648 | orchestrator | 2025-08-29 14:20:29.288664 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-08-29 14:20:29.367984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-08-29 14:20:29.368041 | orchestrator | 2025-08-29 14:20:29.368106 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-08-29 14:20:30.071906 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:30.072015 | orchestrator | 2025-08-29 14:20:30.072032 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-08-29 14:20:30.625967 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:30.626145 | orchestrator | 2025-08-29 14:20:30.626159 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-08-29 14:20:30.674800 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:20:30.674861 | orchestrator | 2025-08-29 14:20:30.674872 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-08-29 14:20:30.729440 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:30.729522 | orchestrator | 2025-08-29 14:20:30.729538 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-08-29 14:20:31.549314 | orchestrator | changed: [testbed-manager] 2025-08-29 14:20:31.549441 | orchestrator | 2025-08-29 14:20:31.549457 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-08-29 14:21:35.787710 | orchestrator | changed: [testbed-manager] 2025-08-29 14:21:35.787816 | orchestrator | 2025-08-29 14:21:35.787832 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-08-29 14:21:36.746707 | orchestrator | ok: [testbed-manager] 2025-08-29 14:21:36.746797 | orchestrator | 2025-08-29 14:21:36.746808 | orchestrator | TASK [osism.services.manager : 
Do a manual start of the manager service] ******* 2025-08-29 14:21:36.791901 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:21:36.791985 | orchestrator | 2025-08-29 14:21:36.791996 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-08-29 14:21:39.283372 | orchestrator | changed: [testbed-manager] 2025-08-29 14:21:39.283477 | orchestrator | 2025-08-29 14:21:39.283492 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-08-29 14:21:39.332416 | orchestrator | ok: [testbed-manager] 2025-08-29 14:21:39.332452 | orchestrator | 2025-08-29 14:21:39.332464 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 14:21:39.332475 | orchestrator | 2025-08-29 14:21:39.332516 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-08-29 14:21:39.379450 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:21:39.379529 | orchestrator | 2025-08-29 14:21:39.379544 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-08-29 14:22:39.467333 | orchestrator | Pausing for 60 seconds 2025-08-29 14:22:39.467437 | orchestrator | changed: [testbed-manager] 2025-08-29 14:22:39.467451 | orchestrator | 2025-08-29 14:22:39.467464 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-08-29 14:22:42.494797 | orchestrator | changed: [testbed-manager] 2025-08-29 14:22:42.494925 | orchestrator | 2025-08-29 14:22:42.494943 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-08-29 14:23:24.056936 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-08-29 14:23:24.057201 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
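The handler above keeps retrying until the manager container reports a healthy status; the deploy script later applies the same idea through its wait_for_container_healthy helper. A minimal bash sketch of that retry-until-healthy pattern, reconstructed from the wait_for_container_healthy trace near the end of this log; the variable names and the docker inspect call follow that trace, while the sleep interval and failure message are assumptions:

    # Sketch of the retry-until-healthy pattern seen in this log.
    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ >= max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    # e.g. the deploy script waits for the three ansible runner containers:
    wait_for_container_healthy 60 osism-ansible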
2025-08-29 14:23:24.057226 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:24.057250 | orchestrator | 2025-08-29 14:23:24.057290 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-08-29 14:23:33.284514 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:33.284661 | orchestrator | 2025-08-29 14:23:33.284678 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-08-29 14:23:33.374335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-08-29 14:23:33.374440 | orchestrator | 2025-08-29 14:23:33.374453 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 14:23:33.374465 | orchestrator | 2025-08-29 14:23:33.374477 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-08-29 14:23:33.427803 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:33.427837 | orchestrator | 2025-08-29 14:23:33.427849 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:23:33.427861 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 14:23:33.427873 | orchestrator | 2025-08-29 14:23:33.518657 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 14:23:33.518725 | orchestrator | + deactivate 2025-08-29 14:23:33.518746 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 14:23:33.518760 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:23:33.518773 | orchestrator | + export PATH 2025-08-29 14:23:33.518784 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 14:23:33.518795 | orchestrator | + '[' -n '' ']' 2025-08-29 14:23:33.518807 | orchestrator | + hash -r 2025-08-29 14:23:33.518817 | orchestrator | + '[' -n '' ']' 2025-08-29 14:23:33.518828 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 14:23:33.518839 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 14:23:33.518850 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-08-29 14:23:33.518860 | orchestrator | + unset -f deactivate 2025-08-29 14:23:33.518872 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-08-29 14:23:33.526538 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 14:23:33.526562 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 14:23:33.526574 | orchestrator | + local max_attempts=60 2025-08-29 14:23:33.526584 | orchestrator | + local name=ceph-ansible 2025-08-29 14:23:33.526595 | orchestrator | + local attempt_num=1 2025-08-29 14:23:33.527358 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:23:33.560546 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:23:33.560624 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 14:23:33.560637 | orchestrator | + local max_attempts=60 2025-08-29 14:23:33.560649 | orchestrator | + local name=kolla-ansible 2025-08-29 14:23:33.560660 | orchestrator | + local attempt_num=1 2025-08-29 14:23:33.561328 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 14:23:33.602223 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:23:33.602287 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 14:23:33.602339 | orchestrator | + local max_attempts=60 2025-08-29 14:23:33.602353 | orchestrator | + local name=osism-ansible 2025-08-29 14:23:33.602364 | orchestrator | + local attempt_num=1 2025-08-29 14:23:33.602773 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 14:23:33.631161 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:23:33.631220 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 14:23:33.631237 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 14:23:34.362270 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-08-29 14:23:34.572421 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-08-29 14:23:34.572561 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572581 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572595 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-08-29 14:23:34.572608 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-08-29 14:23:34.572619 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572630 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572641 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-08-29 14:23:34.572652 | orchestrator | manager-listener-1 
registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572664 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-08-29 14:23:34.572675 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572686 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-08-29 14:23:34.572697 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572708 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.572719 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-08-29 14:23:34.579931 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 14:23:34.625920 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 14:23:34.626064 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-08-29 14:23:34.628475 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-08-29 14:23:46.670899 | orchestrator | 2025-08-29 14:23:46 | INFO  | Task 6f98d4ab-be04-4e57-a18b-977b10dbbbd8 (resolvconf) was prepared for execution. 2025-08-29 14:23:46.671098 | orchestrator | 2025-08-29 14:23:46 | INFO  | It takes a moment until task 6f98d4ab-be04-4e57-a18b-977b10dbbbd8 (resolvconf) has been started and output is visible here. 
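The xtrace output above shows the deployment script leaving its virtualenv, installing the operator's public SSH key, waiting for the ceph-ansible, kolla-ansible and osism-ansible containers to report healthy, listing the manager stack with docker compose ps, and then switching the Ansible callback (the sed on ansible.cfg) before running `osism apply resolvconf -l testbed-manager`, whose play output follows. The wait_for_container_healthy helper can be reconstructed from the trace as the sketch below; the real function in the testbed scripts may differ in its sleep interval and error handling, which the trace does not show:

wait_for_container_healthy() {
    # Wait until "docker inspect" reports the container as healthy,
    # giving up after max_attempts checks.
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # interval assumed; not visible in the trace
    done
}

wait_for_container_healthy 60 ceph-ansible
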
2025-08-29 14:23:59.265856 | orchestrator | 2025-08-29 14:23:59.266114 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-08-29 14:23:59.266137 | orchestrator | 2025-08-29 14:23:59.266149 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:23:59.266161 | orchestrator | Friday 29 August 2025 14:23:50 +0000 (0:00:00.107) 0:00:00.107 ********* 2025-08-29 14:23:59.266173 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:59.266185 | orchestrator | 2025-08-29 14:23:59.266197 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 14:23:59.266208 | orchestrator | Friday 29 August 2025 14:23:53 +0000 (0:00:03.353) 0:00:03.460 ********* 2025-08-29 14:23:59.266219 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:59.266231 | orchestrator | 2025-08-29 14:23:59.266242 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 14:23:59.266253 | orchestrator | Friday 29 August 2025 14:23:53 +0000 (0:00:00.060) 0:00:03.521 ********* 2025-08-29 14:23:59.266264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-08-29 14:23:59.266277 | orchestrator | 2025-08-29 14:23:59.266288 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 14:23:59.266299 | orchestrator | Friday 29 August 2025 14:23:53 +0000 (0:00:00.071) 0:00:03.592 ********* 2025-08-29 14:23:59.266310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:23:59.266321 | orchestrator | 2025-08-29 14:23:59.266332 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 14:23:59.266345 | orchestrator | Friday 29 August 2025 14:23:53 +0000 (0:00:00.082) 0:00:03.674 ********* 2025-08-29 14:23:59.266357 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:59.266369 | orchestrator | 2025-08-29 14:23:59.266381 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 14:23:59.266393 | orchestrator | Friday 29 August 2025 14:23:54 +0000 (0:00:00.859) 0:00:04.534 ********* 2025-08-29 14:23:59.266405 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:59.266417 | orchestrator | 2025-08-29 14:23:59.266429 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 14:23:59.266441 | orchestrator | Friday 29 August 2025 14:23:54 +0000 (0:00:00.062) 0:00:04.596 ********* 2025-08-29 14:23:59.266453 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:59.266464 | orchestrator | 2025-08-29 14:23:59.266478 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 14:23:59.266490 | orchestrator | Friday 29 August 2025 14:23:55 +0000 (0:00:00.477) 0:00:05.073 ********* 2025-08-29 14:23:59.266502 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:59.266514 | orchestrator | 2025-08-29 14:23:59.266526 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 14:23:59.266540 | orchestrator | Friday 29 August 2025 14:23:55 +0000 (0:00:00.078) 0:00:05.152 
********* 2025-08-29 14:23:59.266551 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:59.266563 | orchestrator | 2025-08-29 14:23:59.266575 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 14:23:59.266587 | orchestrator | Friday 29 August 2025 14:23:55 +0000 (0:00:00.495) 0:00:05.648 ********* 2025-08-29 14:23:59.266599 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:59.266638 | orchestrator | 2025-08-29 14:23:59.266650 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 14:23:59.266663 | orchestrator | Friday 29 August 2025 14:23:56 +0000 (0:00:01.037) 0:00:06.685 ********* 2025-08-29 14:23:59.266676 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:59.266687 | orchestrator | 2025-08-29 14:23:59.266698 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 14:23:59.266709 | orchestrator | Friday 29 August 2025 14:23:57 +0000 (0:00:00.930) 0:00:07.616 ********* 2025-08-29 14:23:59.266720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-08-29 14:23:59.266730 | orchestrator | 2025-08-29 14:23:59.266741 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 14:23:59.266751 | orchestrator | Friday 29 August 2025 14:23:57 +0000 (0:00:00.079) 0:00:07.695 ********* 2025-08-29 14:23:59.266762 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:59.266772 | orchestrator | 2025-08-29 14:23:59.266783 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:23:59.266808 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:23:59.266820 | orchestrator | 2025-08-29 14:23:59.266830 | orchestrator | 2025-08-29 14:23:59.266841 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:23:59.266851 | orchestrator | Friday 29 August 2025 14:23:59 +0000 (0:00:01.093) 0:00:08.789 ********* 2025-08-29 14:23:59.266862 | orchestrator | =============================================================================== 2025-08-29 14:23:59.266872 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s 2025-08-29 14:23:59.266883 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2025-08-29 14:23:59.266893 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-08-29 14:23:59.266904 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2025-08-29 14:23:59.266914 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.86s 2025-08-29 14:23:59.266925 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-08-29 14:23:59.266975 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-08-29 14:23:59.266988 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-08-29 14:23:59.266998 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-08-29 
14:23:59.267009 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-08-29 14:23:59.267019 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-08-29 14:23:59.267030 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-08-29 14:23:59.267040 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-08-29 14:23:59.493659 | orchestrator | + osism apply sshconfig 2025-08-29 14:24:11.408069 | orchestrator | 2025-08-29 14:24:11 | INFO  | Task bef4d32b-591f-4711-9bf3-1fbbbd8c7ac7 (sshconfig) was prepared for execution. 2025-08-29 14:24:11.408205 | orchestrator | 2025-08-29 14:24:11 | INFO  | It takes a moment until task bef4d32b-591f-4711-9bf3-1fbbbd8c7ac7 (sshconfig) has been started and output is visible here. 2025-08-29 14:24:21.748050 | orchestrator | 2025-08-29 14:24:21.748223 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-08-29 14:24:21.748244 | orchestrator | 2025-08-29 14:24:21.748256 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-08-29 14:24:21.748268 | orchestrator | Friday 29 August 2025 14:24:14 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-08-29 14:24:21.748318 | orchestrator | ok: [testbed-manager] 2025-08-29 14:24:21.748332 | orchestrator | 2025-08-29 14:24:21.748343 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-08-29 14:24:21.748354 | orchestrator | Friday 29 August 2025 14:24:15 +0000 (0:00:00.450) 0:00:00.570 ********* 2025-08-29 14:24:21.748365 | orchestrator | changed: [testbed-manager] 2025-08-29 14:24:21.748377 | orchestrator | 2025-08-29 14:24:21.748388 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-08-29 14:24:21.748399 | orchestrator | Friday 29 August 2025 14:24:15 +0000 (0:00:00.440) 0:00:01.011 ********* 2025-08-29 14:24:21.748409 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:24:21.748421 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:24:21.748432 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:24:21.748443 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:24:21.748453 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:24:21.748464 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:24:21.748475 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:24:21.748485 | orchestrator | 2025-08-29 14:24:21.748496 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-08-29 14:24:21.748507 | orchestrator | Friday 29 August 2025 14:24:20 +0000 (0:00:05.044) 0:00:06.055 ********* 2025-08-29 14:24:21.748519 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:24:21.748532 | orchestrator | 2025-08-29 14:24:21.748544 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-08-29 14:24:21.748555 | orchestrator | Friday 29 August 2025 14:24:20 +0000 (0:00:00.063) 0:00:06.119 ********* 2025-08-29 14:24:21.748567 | orchestrator | changed: [testbed-manager] 2025-08-29 14:24:21.748579 | orchestrator | 2025-08-29 14:24:21.748592 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-08-29 14:24:21.748606 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:24:21.748619 | orchestrator | 2025-08-29 14:24:21.748631 | orchestrator | 2025-08-29 14:24:21.748644 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:24:21.748656 | orchestrator | Friday 29 August 2025 14:24:21 +0000 (0:00:00.571) 0:00:06.690 ********* 2025-08-29 14:24:21.748689 | orchestrator | =============================================================================== 2025-08-29 14:24:21.748702 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.04s 2025-08-29 14:24:21.748715 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-08-29 14:24:21.748727 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.45s 2025-08-29 14:24:21.748739 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.44s 2025-08-29 14:24:21.748751 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-08-29 14:24:22.021224 | orchestrator | + osism apply known-hosts 2025-08-29 14:24:33.944151 | orchestrator | 2025-08-29 14:24:33 | INFO  | Task 8db27d57-272e-486e-a4fb-771d42e07d4b (known-hosts) was prepared for execution. 2025-08-29 14:24:33.944275 | orchestrator | 2025-08-29 14:24:33 | INFO  | It takes a moment until task 8db27d57-272e-486e-a4fb-771d42e07d4b (known-hosts) has been started and output is visible here. 2025-08-29 14:24:50.833034 | orchestrator | 2025-08-29 14:24:50.833155 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-08-29 14:24:50.833172 | orchestrator | 2025-08-29 14:24:50.833185 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-08-29 14:24:50.833197 | orchestrator | Friday 29 August 2025 14:24:37 +0000 (0:00:00.158) 0:00:00.158 ********* 2025-08-29 14:24:50.833209 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:24:50.833246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:24:50.833257 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:24:50.833268 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:24:50.833279 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:24:50.833289 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:24:50.833300 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:24:50.833311 | orchestrator | 2025-08-29 14:24:50.833322 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-08-29 14:24:50.833334 | orchestrator | Friday 29 August 2025 14:24:43 +0000 (0:00:05.792) 0:00:05.951 ********* 2025-08-29 14:24:50.833346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:24:50.833359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-node-0) 2025-08-29 14:24:50.833370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:24:50.833381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:24:50.833391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:24:50.833402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:24:50.833413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:24:50.833423 | orchestrator | 2025-08-29 14:24:50.833434 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:50.833445 | orchestrator | Friday 29 August 2025 14:24:43 +0000 (0:00:00.161) 0:00:06.112 ********* 2025-08-29 14:24:50.833456 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEsn+Dnic/BPaV23xrGrbQ4PUS0m3CGUUf4+b6Yae2EU) 2025-08-29 14:24:50.833473 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4WJpZNRpSqTfKApBI0Qo8ygRYMgLKCdAUXhIFFU3AVyPZjyQ1uI3iuCUZzXMgakGDURg+6F70xusnpmKKijc7uUQc2743L2V1BZiaPZa/jOtv6Ps73xThhf72P1N+GWhw6uC0E62N9L9SCSsxgX5WG8szKG3fAplahhQKv3Cj9bbBoeECgNlHiBt26Otnrs2qWXpaDbNM9bGXNlV9sMJZ6GcA0+92A4+ddfZg2jHNI7aslArb1R89FaTa07yPk/78mFPqEKaUbPx2pdUcFK/wpZSudmz+t0a1JrXZ1TCjCliieipFoPZxP7fm1UrqwamflKswB3Cz/n+icKCWL0uuUXNIW/UMFqqGLblJI02x4CgAf9DfJF9JLOUNtRyDRnuTXX5czfunfRulcGcwqv4a7iKLw3aaPOsq/7Ju06s7Lyj1zWhCRJM2a7Rw5ewmB/NFiTWiI40l0D7F5bcEMg0Q/lPtCOu9N6r3Ft3Wf6OFO+xPivlfDVo0pbw04xXM/YU=) 2025-08-29 14:24:50.833557 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6GmIyg9N9m9L/6ttc21lHgmkaJxdChcf8mp24/Y/FwmGkPYHmOVvdLZMmCqLDMdt89n+04LXmBKvoXllc7r70=) 2025-08-29 14:24:50.833571 | orchestrator | 2025-08-29 14:24:50.833582 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:50.833593 | orchestrator | Friday 29 August 2025 14:24:44 +0000 (0:00:01.119) 0:00:07.232 ********* 2025-08-29 14:24:50.833624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfbPwVeBmg856XsgnO9RlALPSaAO/M7YvGHWQWWfYk3k2keojlP8zhx1NTIagmya5AW3DYmeXDaP7FJOHC2jsPSv7GUBP+BdNMaVaW26wmyyhIxIxPB/Rg/QWoYqWyPxOsvS6sUmI+h+1D6QwiYWZfhIEKt1TUpMdMVUMpvxw00cVP0c1CU4EFdYjfTrL9fXCv/Hz4MCRlE9qsdaBKyiZnKVe/M+mQQEKpDsYJkO9+03p7gu4gaBUQDPyFsPfJE/xsBt+g6D98OxlYBl8qQdSKN9z7jG0IRD9Snd6eiPPz0UXokv3klKoPC1cNVclFenOT6wPMZCl2mWhv1j9yK2CsIwjJiwqPkHwOaZ2mlTuJRhPEDybFWEenHX0IPyrMhlgwZXcYrQLbGwcuSi5J75iJIYYY0SNRc89NPXVvqORCpY1GhO6Oz2fTR5F1hvUCzkkyM+DyF+ytfLPAIHGEPJ7qkIFOX88H4QqeIb1aKRRidyq70XR13Oh1+Gi2H8zZK1k=) 2025-08-29 14:24:50.833645 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENNu4YLDUS300GMKYpYa22PLwKJjBnFAQtgBMD+dH2gfdXfaNcpCge9dizplNWb7OleLClaDRavRc+O30gglhw=) 2025-08-29 14:24:50.833656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrhwF2JayN8PCh/NKWxfgs01c/cXj3DeGJzDXCtcn61) 2025-08-29 14:24:50.833667 | orchestrator | 2025-08-29 14:24:50.833678 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:50.833688 | orchestrator | Friday 29 August 2025 14:24:45 +0000 (0:00:00.976) 0:00:08.209 ********* 2025-08-29 14:24:50.833700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIF7K+KIXERBdKH5bo0PVwKkopK55jit1qMQMSK/1zg+BkQ/8Qkp1Hp/fjo8+XxXUG7RXn7mtXaggVN9iK7rgryprlw2q34DiECXcuVbk9maq23ysdmI9qnDrBlyGvVYoQUTVPnJufddm9+5ywF7mIP+fOe22ljeB7dp81Tq1NwrRWLF7+BC5WiqCpclwrOOG4uM63mXH7y4WEDTyW8yIzcDO+WViqfHZrv9fXQQQHyBQPPl7qOFVNj7MU9RaaqQvG4uURc6QbKS56aj7XEcj/ljQImLQ9ytWySTFMj8c+/gdpjMnErfI8sWnenYpgOLFRQPE8T04foBLz5VakqR/0VMgiHdLNY6nsQFnIYdA6xx7TWDN7rbygaE7GoHde53fX0GTFUrfgMCq7d4VGkMB+XB9/d5KKyRBS1rxAGpBctqtq10z5d21FM/NdTyl1VvI2PpE7+LdL/hlJDTG+79CHbGqxin+rbJ6zT+pnWhrfjAZ/AtHUWex2hY9gotlqbYk=) 2025-08-29 14:24:50.833711 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlJbIOUFncdORReYjfl4MZhTzMrYjlTG+4Spu6Q4FH6/k8zTXs2D+OF21gXmG1FDnmmnA+43MX/Ci5MEK4Pk4I=) 2025-08-29 14:24:50.833722 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILgSs2OdM29JIX+rqhTK2ES2dlybxZHqA/Vfx5Zt9auK) 2025-08-29 14:24:50.833733 | orchestrator | 2025-08-29 14:24:50.833744 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:50.833755 | orchestrator | Friday 29 August 2025 14:24:46 +0000 (0:00:01.009) 0:00:09.218 ********* 2025-08-29 14:24:50.833766 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIrZi/wEchTMZFn37Wb5bYHAjR/rprgMBBPJQ0aegTnOj9h0Bue1W0zq9SG6e5s4B98x9/ObHDeqgcc3dbTNGSY6Bb5eEyFoXpPYfsZmiNOgAPI3WTkJ3rZmDmzEb5WUOz9hd9sc3F9JLfKOaVfBrC8U3bMlX+hP2bAf8r92q19sMZRkz8Hfz0RxUD0nt2k0MHQ+Y1TVqmvWL3IkKopt9WsTItqiv/xFehSgklmocz08YHsJxqaFgTOYiezay8GbF0xFLpV0C4jXXILcrcOZ/sBlcnP+a8JxBBK2k33VmCnktHuJ2rqGR8HQRjHQRGRTSQhaib4oerzD4nb+LJEvF9I5MrndQ8mg/INgMbA7dVNC/nVQJirV+h8UYOU1Yv6d/VZOhecq3oj+8MIL+1DsWZF/PR+U8voKtGYOh7g5vdz3eOhhUabDDA/V5uRef4koXR3GbY0xTwV5e+epgVt0HWWvTvKaQE1QBIUHUHjmriBispL4XWznmRb41+wc6q0zk=) 2025-08-29 14:24:50.833777 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGC5LbsjTqaSGNOveh4r4BoDu+lYqmrbLeriRFqf/BQzQ8THppf9TpwbwUrumfdxwslkW8HBvI7ityY5XyDCcOg=) 2025-08-29 14:24:50.833789 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/aT7IieaKC3ZBE2WDXqxkS7iAUL3Xqi/wI4autEBmv) 2025-08-29 14:24:50.833800 | orchestrator | 2025-08-29 14:24:50.833811 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:50.833821 | orchestrator | Friday 29 August 2025 14:24:47 +0000 (0:00:00.993) 0:00:10.212 ********* 2025-08-29 14:24:50.833832 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPopJJ1jxhes8MOQ5pu1XYF/VtXUVfvwwpQHp3nCSXg8P56Z8e69fUIC0eNQLm7Z4kcoDuBwJikFFgNFSw+hRiI=) 2025-08-29 14:24:50.833843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnpfcwxkhEzIE98PapRIQrjQ/z9J4RCXu3Mm2z31j0L8tQqj5xVm/Gz7RB7NDpD0giBvTe0fnyU9thzWcOk37z6yYAtunucH8KMxRTd3+O98k1+TMSA+FkKfArKm0JWmreU5Ve/dF6wEXsAQ4Q88Ta3G+zdIW4prFaco1gQbSTY9rJYiAxpGTy81M+XW95HhxoZD4WVqoLyOYXO2aX7GthzXo8URAGjeZzpthWJnkcl+WbukezqXhzEVTrCmy2Xd47Nv7Is19qAhDPFXAvQM7gLXo5DNg7TbaO87+Div8AX7eLex+bt0OyWv9+rlIk/1X8FDcxajXWiBp9Zy1+4MUZBdJiUW0uzCpRWKHgXfDWbZgP5Gct+6gep6eMv7Jygh9C0oQsT4AgiAukkj/LmwqJNqf9jYh0fMlWApZg6FSUFmmODRQVERu6xXUjwKYiBr9hiQK3i/fLrWjVczAAeBGnuzALfYAOXoNvW79xORF1XrwrpJf79mRFMHT1VG5daPM=) 2025-08-29 14:24:50.833866 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDYVDYcBPkXRgA3AJh3U8zrks8flKa4oicOS1qvb+y4c) 2025-08-29 14:24:50.833877 | orchestrator | 2025-08-29 14:24:50.833888 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:50.833899 | orchestrator | Friday 29 August 2025 14:24:49 +0000 (0:00:02.030) 0:00:12.243 ********* 2025-08-29 14:24:50.833935 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMeCsA+qU1vzCY8Dq0ULy3Qiho99Hq/T3PqXrZOh80dF9+yXQQ7lOvRFhQMW5s4s8tp2Hv8mz8bg+ESTlfTJVlk=) 2025-08-29 14:25:01.158227 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm8e23y8JYC75uX7i0DBF6FDxFqeER0ZeXzA0s2D3YtPlElWGs0/2zUIdKtXwPmksWtFDqm2wd1dlkPU52z9WLBrsZ+AEiUqtEAWKZC7KBQ+1sD6HCtbWzWk4zqyoHRnOpH+ndP8wYiAosNS/+GlmM40yzJCRkR5fNfx0zFNLg/FL6W7Y5BZyF7GyQiYMFn2hPU5aKGQlAOCv+8JunsO99Q+Gznz3t45zL0o4n9WOSvsHih8CWkkHyyrIVg+moyAxPA19AaHsUxOzr4ln+soRC9F456KU7cyoE2z22ALVyxYUmmNtS+XjzbL0RoU1D92YvVyrg6DnXGl2z+udxMdM5HBDfvkGdIzWoymSj+pPjmAD15GbHy7dVu8dHRoYW9wZcSlVChJdZ2QSVaWnreiAW6pvboebsdJ7U9iFju1OcmORsMHXEL+mEBGaziyPdyTdXotFjonkcgDbCPqAHTrNjOFXDjVYuZOIMePNGIU8TpTs7X82bZfLubtd2PrIchG8=) 2025-08-29 14:25:01.158376 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIcgKebxmo1NRjm+waminexMtJoHafANS1JMhuXQqfp) 2025-08-29 14:25:01.158395 | orchestrator | 2025-08-29 14:25:01.158407 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:01.158420 | orchestrator | Friday 29 August 2025 14:24:50 +0000 (0:00:01.008) 0:00:13.251 ********* 2025-08-29 14:25:01.158433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtGxOh1JDqsc1vFxd7SK1QTiBMcIaPbCZpE/NblLKENpvQYcANZc/l21Huqw0fCTJY2duYaUhy18io0eZKAQgGUaHOo5Nwouh3TuLTuyBGbw+sekfQX8R+vgdwxrHwBu1rSNqzIjF3Mzbd0DXuDKh/buPDoEsBjlMFJZvgkz1+HH91qJsO4qw/CsxrJfYUSbzAw26+4bPJVye2dHN0a0ca7n2ht8xyCBCYjdYeAHDlHNPT3wy/cE5Gzu805UNWPSxZf9odzJ/N2GQCIAtbuc4spXuYMYz7Etm6lZWpKfomrysoGC4kOdz8wA/lSDeEDcsdejfMgZ7PkOC8GR63ghFI4DAdSnse8LXAUuldLmYwG7GR7ze+cWrO4dSQlQkAPD5ziw/IpDF+KqFh/xsBl3eKxCHpDJeLIWbvhyDI6gZyNnpp7KjsRGsA0i5BsN1UjZ9xQC4weUTXbKYi9pUDN3XRU7GDEmMFnM4uL8R69P6pE7rbx4IO58wHC9Ed+YaA7X0=) 2025-08-29 14:25:01.158445 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG++5TARTyyBNyJb3p3BzKQtn9XEQpPz9b/wKBa0QG4Jx6+R965NNEDyy/IeAfariKZ5v6rvg0h0MR1AWny9mwI=) 2025-08-29 
14:25:01.159283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILfytUxRpBpf+ff/qNWpl1ZioYvW5gPk/4LHsln+pXop) 2025-08-29 14:25:01.159305 | orchestrator | 2025-08-29 14:25:01.159317 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-08-29 14:25:01.159329 | orchestrator | Friday 29 August 2025 14:24:51 +0000 (0:00:00.998) 0:00:14.250 ********* 2025-08-29 14:25:01.159340 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:25:01.159351 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:25:01.159362 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:25:01.159372 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:25:01.159383 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:25:01.159417 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:25:01.159429 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:25:01.159439 | orchestrator | 2025-08-29 14:25:01.159450 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-08-29 14:25:01.159461 | orchestrator | Friday 29 August 2025 14:24:56 +0000 (0:00:05.129) 0:00:19.379 ********* 2025-08-29 14:25:01.159474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:25:01.159487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 14:25:01.159498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:25:01.159508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:25:01.159519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:25:01.159529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:25:01.159540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:25:01.159550 | orchestrator | 2025-08-29 14:25:01.159580 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:01.159591 | orchestrator | Friday 29 August 2025 14:24:57 +0000 (0:00:00.155) 0:00:19.535 ********* 2025-08-29 14:25:01.159602 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEsn+Dnic/BPaV23xrGrbQ4PUS0m3CGUUf4+b6Yae2EU) 2025-08-29 14:25:01.159625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4WJpZNRpSqTfKApBI0Qo8ygRYMgLKCdAUXhIFFU3AVyPZjyQ1uI3iuCUZzXMgakGDURg+6F70xusnpmKKijc7uUQc2743L2V1BZiaPZa/jOtv6Ps73xThhf72P1N+GWhw6uC0E62N9L9SCSsxgX5WG8szKG3fAplahhQKv3Cj9bbBoeECgNlHiBt26Otnrs2qWXpaDbNM9bGXNlV9sMJZ6GcA0+92A4+ddfZg2jHNI7aslArb1R89FaTa07yPk/78mFPqEKaUbPx2pdUcFK/wpZSudmz+t0a1JrXZ1TCjCliieipFoPZxP7fm1UrqwamflKswB3Cz/n+icKCWL0uuUXNIW/UMFqqGLblJI02x4CgAf9DfJF9JLOUNtRyDRnuTXX5czfunfRulcGcwqv4a7iKLw3aaPOsq/7Ju06s7Lyj1zWhCRJM2a7Rw5ewmB/NFiTWiI40l0D7F5bcEMg0Q/lPtCOu9N6r3Ft3Wf6OFO+xPivlfDVo0pbw04xXM/YU=) 2025-08-29 14:25:01.159637 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6GmIyg9N9m9L/6ttc21lHgmkaJxdChcf8mp24/Y/FwmGkPYHmOVvdLZMmCqLDMdt89n+04LXmBKvoXllc7r70=) 2025-08-29 14:25:01.159649 | orchestrator | 2025-08-29 14:25:01.159659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:01.159670 | orchestrator | Friday 29 August 2025 14:24:58 +0000 (0:00:00.993) 0:00:20.528 ********* 2025-08-29 14:25:01.159681 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrhwF2JayN8PCh/NKWxfgs01c/cXj3DeGJzDXCtcn61) 2025-08-29 14:25:01.159692 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfbPwVeBmg856XsgnO9RlALPSaAO/M7YvGHWQWWfYk3k2keojlP8zhx1NTIagmya5AW3DYmeXDaP7FJOHC2jsPSv7GUBP+BdNMaVaW26wmyyhIxIxPB/Rg/QWoYqWyPxOsvS6sUmI+h+1D6QwiYWZfhIEKt1TUpMdMVUMpvxw00cVP0c1CU4EFdYjfTrL9fXCv/Hz4MCRlE9qsdaBKyiZnKVe/M+mQQEKpDsYJkO9+03p7gu4gaBUQDPyFsPfJE/xsBt+g6D98OxlYBl8qQdSKN9z7jG0IRD9Snd6eiPPz0UXokv3klKoPC1cNVclFenOT6wPMZCl2mWhv1j9yK2CsIwjJiwqPkHwOaZ2mlTuJRhPEDybFWEenHX0IPyrMhlgwZXcYrQLbGwcuSi5J75iJIYYY0SNRc89NPXVvqORCpY1GhO6Oz2fTR5F1hvUCzkkyM+DyF+ytfLPAIHGEPJ7qkIFOX88H4QqeIb1aKRRidyq70XR13Oh1+Gi2H8zZK1k=) 2025-08-29 14:25:01.159711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENNu4YLDUS300GMKYpYa22PLwKJjBnFAQtgBMD+dH2gfdXfaNcpCge9dizplNWb7OleLClaDRavRc+O30gglhw=) 2025-08-29 14:25:01.159722 | orchestrator | 2025-08-29 14:25:01.159733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:01.159743 | orchestrator | Friday 29 August 2025 14:24:59 +0000 (0:00:01.062) 0:00:21.591 ********* 2025-08-29 14:25:01.159755 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIF7K+KIXERBdKH5bo0PVwKkopK55jit1qMQMSK/1zg+BkQ/8Qkp1Hp/fjo8+XxXUG7RXn7mtXaggVN9iK7rgryprlw2q34DiECXcuVbk9maq23ysdmI9qnDrBlyGvVYoQUTVPnJufddm9+5ywF7mIP+fOe22ljeB7dp81Tq1NwrRWLF7+BC5WiqCpclwrOOG4uM63mXH7y4WEDTyW8yIzcDO+WViqfHZrv9fXQQQHyBQPPl7qOFVNj7MU9RaaqQvG4uURc6QbKS56aj7XEcj/ljQImLQ9ytWySTFMj8c+/gdpjMnErfI8sWnenYpgOLFRQPE8T04foBLz5VakqR/0VMgiHdLNY6nsQFnIYdA6xx7TWDN7rbygaE7GoHde53fX0GTFUrfgMCq7d4VGkMB+XB9/d5KKyRBS1rxAGpBctqtq10z5d21FM/NdTyl1VvI2PpE7+LdL/hlJDTG+79CHbGqxin+rbJ6zT+pnWhrfjAZ/AtHUWex2hY9gotlqbYk=) 2025-08-29 14:25:01.159766 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlJbIOUFncdORReYjfl4MZhTzMrYjlTG+4Spu6Q4FH6/k8zTXs2D+OF21gXmG1FDnmmnA+43MX/Ci5MEK4Pk4I=) 2025-08-29 14:25:01.159777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILgSs2OdM29JIX+rqhTK2ES2dlybxZHqA/Vfx5Zt9auK) 2025-08-29 
14:25:01.159787 | orchestrator | 2025-08-29 14:25:01.159798 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:01.159809 | orchestrator | Friday 29 August 2025 14:25:00 +0000 (0:00:00.984) 0:00:22.575 ********* 2025-08-29 14:25:01.159829 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIrZi/wEchTMZFn37Wb5bYHAjR/rprgMBBPJQ0aegTnOj9h0Bue1W0zq9SG6e5s4B98x9/ObHDeqgcc3dbTNGSY6Bb5eEyFoXpPYfsZmiNOgAPI3WTkJ3rZmDmzEb5WUOz9hd9sc3F9JLfKOaVfBrC8U3bMlX+hP2bAf8r92q19sMZRkz8Hfz0RxUD0nt2k0MHQ+Y1TVqmvWL3IkKopt9WsTItqiv/xFehSgklmocz08YHsJxqaFgTOYiezay8GbF0xFLpV0C4jXXILcrcOZ/sBlcnP+a8JxBBK2k33VmCnktHuJ2rqGR8HQRjHQRGRTSQhaib4oerzD4nb+LJEvF9I5MrndQ8mg/INgMbA7dVNC/nVQJirV+h8UYOU1Yv6d/VZOhecq3oj+8MIL+1DsWZF/PR+U8voKtGYOh7g5vdz3eOhhUabDDA/V5uRef4koXR3GbY0xTwV5e+epgVt0HWWvTvKaQE1QBIUHUHjmriBispL4XWznmRb41+wc6q0zk=) 2025-08-29 14:25:05.231988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGC5LbsjTqaSGNOveh4r4BoDu+lYqmrbLeriRFqf/BQzQ8THppf9TpwbwUrumfdxwslkW8HBvI7ityY5XyDCcOg=) 2025-08-29 14:25:05.232125 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/aT7IieaKC3ZBE2WDXqxkS7iAUL3Xqi/wI4autEBmv) 2025-08-29 14:25:05.232142 | orchestrator | 2025-08-29 14:25:05.232155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:05.232168 | orchestrator | Friday 29 August 2025 14:25:01 +0000 (0:00:01.003) 0:00:23.579 ********* 2025-08-29 14:25:05.232180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPopJJ1jxhes8MOQ5pu1XYF/VtXUVfvwwpQHp3nCSXg8P56Z8e69fUIC0eNQLm7Z4kcoDuBwJikFFgNFSw+hRiI=) 2025-08-29 14:25:05.232194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnpfcwxkhEzIE98PapRIQrjQ/z9J4RCXu3Mm2z31j0L8tQqj5xVm/Gz7RB7NDpD0giBvTe0fnyU9thzWcOk37z6yYAtunucH8KMxRTd3+O98k1+TMSA+FkKfArKm0JWmreU5Ve/dF6wEXsAQ4Q88Ta3G+zdIW4prFaco1gQbSTY9rJYiAxpGTy81M+XW95HhxoZD4WVqoLyOYXO2aX7GthzXo8URAGjeZzpthWJnkcl+WbukezqXhzEVTrCmy2Xd47Nv7Is19qAhDPFXAvQM7gLXo5DNg7TbaO87+Div8AX7eLex+bt0OyWv9+rlIk/1X8FDcxajXWiBp9Zy1+4MUZBdJiUW0uzCpRWKHgXfDWbZgP5Gct+6gep6eMv7Jygh9C0oQsT4AgiAukkj/LmwqJNqf9jYh0fMlWApZg6FSUFmmODRQVERu6xXUjwKYiBr9hiQK3i/fLrWjVczAAeBGnuzALfYAOXoNvW79xORF1XrwrpJf79mRFMHT1VG5daPM=) 2025-08-29 14:25:05.232238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDYVDYcBPkXRgA3AJh3U8zrks8flKa4oicOS1qvb+y4c) 2025-08-29 14:25:05.232250 | orchestrator | 2025-08-29 14:25:05.232261 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:05.232271 | orchestrator | Friday 29 August 2025 14:25:02 +0000 (0:00:01.031) 0:00:24.611 ********* 2025-08-29 14:25:05.232283 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCm8e23y8JYC75uX7i0DBF6FDxFqeER0ZeXzA0s2D3YtPlElWGs0/2zUIdKtXwPmksWtFDqm2wd1dlkPU52z9WLBrsZ+AEiUqtEAWKZC7KBQ+1sD6HCtbWzWk4zqyoHRnOpH+ndP8wYiAosNS/+GlmM40yzJCRkR5fNfx0zFNLg/FL6W7Y5BZyF7GyQiYMFn2hPU5aKGQlAOCv+8JunsO99Q+Gznz3t45zL0o4n9WOSvsHih8CWkkHyyrIVg+moyAxPA19AaHsUxOzr4ln+soRC9F456KU7cyoE2z22ALVyxYUmmNtS+XjzbL0RoU1D92YvVyrg6DnXGl2z+udxMdM5HBDfvkGdIzWoymSj+pPjmAD15GbHy7dVu8dHRoYW9wZcSlVChJdZ2QSVaWnreiAW6pvboebsdJ7U9iFju1OcmORsMHXEL+mEBGaziyPdyTdXotFjonkcgDbCPqAHTrNjOFXDjVYuZOIMePNGIU8TpTs7X82bZfLubtd2PrIchG8=) 2025-08-29 14:25:05.232316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMeCsA+qU1vzCY8Dq0ULy3Qiho99Hq/T3PqXrZOh80dF9+yXQQ7lOvRFhQMW5s4s8tp2Hv8mz8bg+ESTlfTJVlk=) 2025-08-29 14:25:05.232328 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIcgKebxmo1NRjm+waminexMtJoHafANS1JMhuXQqfp) 2025-08-29 14:25:05.232339 | orchestrator | 2025-08-29 14:25:05.232350 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:25:05.232361 | orchestrator | Friday 29 August 2025 14:25:03 +0000 (0:00:01.025) 0:00:25.636 ********* 2025-08-29 14:25:05.232372 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG++5TARTyyBNyJb3p3BzKQtn9XEQpPz9b/wKBa0QG4Jx6+R965NNEDyy/IeAfariKZ5v6rvg0h0MR1AWny9mwI=) 2025-08-29 14:25:05.232383 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtGxOh1JDqsc1vFxd7SK1QTiBMcIaPbCZpE/NblLKENpvQYcANZc/l21Huqw0fCTJY2duYaUhy18io0eZKAQgGUaHOo5Nwouh3TuLTuyBGbw+sekfQX8R+vgdwxrHwBu1rSNqzIjF3Mzbd0DXuDKh/buPDoEsBjlMFJZvgkz1+HH91qJsO4qw/CsxrJfYUSbzAw26+4bPJVye2dHN0a0ca7n2ht8xyCBCYjdYeAHDlHNPT3wy/cE5Gzu805UNWPSxZf9odzJ/N2GQCIAtbuc4spXuYMYz7Etm6lZWpKfomrysoGC4kOdz8wA/lSDeEDcsdejfMgZ7PkOC8GR63ghFI4DAdSnse8LXAUuldLmYwG7GR7ze+cWrO4dSQlQkAPD5ziw/IpDF+KqFh/xsBl3eKxCHpDJeLIWbvhyDI6gZyNnpp7KjsRGsA0i5BsN1UjZ9xQC4weUTXbKYi9pUDN3XRU7GDEmMFnM4uL8R69P6pE7rbx4IO58wHC9Ed+YaA7X0=) 2025-08-29 14:25:05.232394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILfytUxRpBpf+ff/qNWpl1ZioYvW5gPk/4LHsln+pXop) 2025-08-29 14:25:05.232405 | orchestrator | 2025-08-29 14:25:05.232416 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-08-29 14:25:05.232427 | orchestrator | Friday 29 August 2025 14:25:04 +0000 (0:00:01.059) 0:00:26.695 ********* 2025-08-29 14:25:05.232439 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 14:25:05.232451 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 14:25:05.232480 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 14:25:05.232498 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 14:25:05.232508 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 14:25:05.232519 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 14:25:05.232530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 14:25:05.232540 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:25:05.232551 | orchestrator | 2025-08-29 14:25:05.232562 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-08-29 14:25:05.232581 | orchestrator | Friday 29 August 2025 14:25:04 +0000 (0:00:00.155) 0:00:26.850 ********* 2025-08-29 14:25:05.232592 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:25:05.232603 | orchestrator | 2025-08-29 14:25:05.232614 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-08-29 14:25:05.232625 | orchestrator | Friday 29 August 2025 14:25:04 +0000 (0:00:00.049) 0:00:26.900 ********* 2025-08-29 14:25:05.232636 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:25:05.232646 | orchestrator | 2025-08-29 14:25:05.232657 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-08-29 14:25:05.232668 | orchestrator | Friday 29 August 2025 14:25:04 +0000 (0:00:00.046) 0:00:26.946 ********* 2025-08-29 14:25:05.232678 | orchestrator | changed: [testbed-manager] 2025-08-29 14:25:05.232689 | orchestrator | 2025-08-29 14:25:05.232700 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:25:05.232711 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:25:05.232723 | orchestrator | 2025-08-29 14:25:05.232734 | orchestrator | 2025-08-29 14:25:05.232744 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:25:05.232755 | orchestrator | Friday 29 August 2025 14:25:05 +0000 (0:00:00.496) 0:00:27.442 ********* 2025-08-29 14:25:05.232766 | orchestrator | =============================================================================== 2025-08-29 14:25:05.232777 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.79s 2025-08-29 14:25:05.232788 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.13s 2025-08-29 14:25:05.232800 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.03s 2025-08-29 14:25:05.232810 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:25:05.232821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-08-29 14:25:05.232832 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-08-29 14:25:05.232843 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-08-29 14:25:05.232853 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-08-29 14:25:05.232864 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-08-29 14:25:05.232874 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-08-29 14:25:05.232885 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-08-29 14:25:05.232896 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-08-29 14:25:05.232928 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-08-29 14:25:05.232939 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-08-29 14:25:05.232950 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-08-29 
14:25:05.232961 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-08-29 14:25:05.232971 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2025-08-29 14:25:05.232982 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-08-29 14:25:05.232993 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-08-29 14:25:05.233004 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-08-29 14:25:05.414708 | orchestrator | + osism apply squid 2025-08-29 14:25:17.154689 | orchestrator | 2025-08-29 14:25:17 | INFO  | Task a34385cf-f62b-4698-ae8e-8bc179a4d953 (squid) was prepared for execution. 2025-08-29 14:25:17.154837 | orchestrator | 2025-08-29 14:25:17 | INFO  | It takes a moment until task a34385cf-f62b-4698-ae8e-8bc179a4d953 (squid) has been started and output is visible here. 2025-08-29 14:27:08.253319 | orchestrator | 2025-08-29 14:27:08.253462 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-08-29 14:27:08.253480 | orchestrator | 2025-08-29 14:27:08.253494 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-08-29 14:27:08.253506 | orchestrator | Friday 29 August 2025 14:25:20 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-08-29 14:27:08.253517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:27:08.253529 | orchestrator | 2025-08-29 14:27:08.253540 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-08-29 14:27:08.253551 | orchestrator | Friday 29 August 2025 14:25:20 +0000 (0:00:00.063) 0:00:00.184 ********* 2025-08-29 14:27:08.253562 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:08.253574 | orchestrator | 2025-08-29 14:27:08.253585 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-08-29 14:27:08.253596 | orchestrator | Friday 29 August 2025 14:25:21 +0000 (0:00:01.057) 0:00:01.241 ********* 2025-08-29 14:27:08.253608 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-08-29 14:27:08.253619 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-08-29 14:27:08.253630 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-08-29 14:27:08.253641 | orchestrator | 2025-08-29 14:27:08.253651 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-08-29 14:27:08.253662 | orchestrator | Friday 29 August 2025 14:25:22 +0000 (0:00:00.966) 0:00:02.208 ********* 2025-08-29 14:27:08.253673 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-08-29 14:27:08.253684 | orchestrator | 2025-08-29 14:27:08.253695 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-08-29 14:27:08.253706 | orchestrator | Friday 29 August 2025 14:25:23 +0000 (0:00:00.911) 0:00:03.120 ********* 2025-08-29 14:27:08.253717 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:08.253727 | orchestrator | 2025-08-29 14:27:08.253738 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 
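The squid play running here deploys a caching proxy on the manager the same way the manager service itself was deployed: directories under /opt/squid, a rendered squid configuration, and a docker-compose.yml that is started and health-checked (the "Manage squid service" task below needs one retry before the compose project comes up, followed by a 60-second settle pause). A quick manual check of the resulting service, as a sketch to be run on testbed-manager under the assumptions that the compose project lives in /opt/squid, the container is named squid, and it listens on squid's default port 3128 - none of which is confirmed by this log:

# Show the compose project state and the container health, then make one
# request through the proxy; path, container name and port are assumptions.
docker compose --project-directory /opt/squid ps
docker inspect -f '{{.State.Health.Status}}' squid
curl -sSf -x http://localhost:3128 http://www.example.com -o /dev/null \
    && echo "proxy answers"
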
2025-08-29 14:27:08.253761 | orchestrator | Friday 29 August 2025 14:25:23 +0000 (0:00:00.301) 0:00:03.421 ********* 2025-08-29 14:27:08.253773 | orchestrator | changed: [testbed-manager] 2025-08-29 14:27:08.253784 | orchestrator | 2025-08-29 14:27:08.253795 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-08-29 14:27:08.253806 | orchestrator | Friday 29 August 2025 14:25:24 +0000 (0:00:00.820) 0:00:04.241 ********* 2025-08-29 14:27:08.253816 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-08-29 14:27:08.253828 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:08.253865 | orchestrator | 2025-08-29 14:27:08.253877 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-08-29 14:27:08.253888 | orchestrator | Friday 29 August 2025 14:25:55 +0000 (0:00:30.624) 0:00:34.866 ********* 2025-08-29 14:27:08.253899 | orchestrator | changed: [testbed-manager] 2025-08-29 14:27:08.253910 | orchestrator | 2025-08-29 14:27:08.253920 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-08-29 14:27:08.253931 | orchestrator | Friday 29 August 2025 14:26:07 +0000 (0:00:11.986) 0:00:46.853 ********* 2025-08-29 14:27:08.253942 | orchestrator | Pausing for 60 seconds 2025-08-29 14:27:08.253953 | orchestrator | changed: [testbed-manager] 2025-08-29 14:27:08.253964 | orchestrator | 2025-08-29 14:27:08.253975 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-08-29 14:27:08.253986 | orchestrator | Friday 29 August 2025 14:27:07 +0000 (0:01:00.067) 0:01:46.920 ********* 2025-08-29 14:27:08.253996 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:08.254007 | orchestrator | 2025-08-29 14:27:08.254096 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-08-29 14:27:08.254131 | orchestrator | Friday 29 August 2025 14:27:07 +0000 (0:00:00.067) 0:01:46.988 ********* 2025-08-29 14:27:08.254143 | orchestrator | changed: [testbed-manager] 2025-08-29 14:27:08.254153 | orchestrator | 2025-08-29 14:27:08.254164 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:27:08.254175 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:27:08.254186 | orchestrator | 2025-08-29 14:27:08.254196 | orchestrator | 2025-08-29 14:27:08.254207 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:27:08.254217 | orchestrator | Friday 29 August 2025 14:27:08 +0000 (0:00:00.599) 0:01:47.588 ********* 2025-08-29 14:27:08.254228 | orchestrator | =============================================================================== 2025-08-29 14:27:08.254238 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-08-29 14:27:08.254249 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.62s 2025-08-29 14:27:08.254259 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.99s 2025-08-29 14:27:08.254270 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.06s 2025-08-29 14:27:08.254280 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.97s 2025-08-29 
14:27:08.254291 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.91s 2025-08-29 14:27:08.254301 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.82s 2025-08-29 14:27:08.254313 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-08-29 14:27:08.254323 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.30s 2025-08-29 14:27:08.254334 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-08-29 14:27:08.254344 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s 2025-08-29 14:27:08.493109 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 14:27:08.493252 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-08-29 14:27:08.499185 | orchestrator | ++ semver 9.2.0 9.0.0 2025-08-29 14:27:08.566872 | orchestrator | + [[ 1 -lt 0 ]] 2025-08-29 14:27:08.567391 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-08-29 14:27:20.476206 | orchestrator | 2025-08-29 14:27:20 | INFO  | Task fc0afcf5-8464-479a-8245-7f7f35c87e7f (operator) was prepared for execution. 2025-08-29 14:27:20.476316 | orchestrator | 2025-08-29 14:27:20 | INFO  | It takes a moment until task fc0afcf5-8464-479a-8245-7f7f35c87e7f (operator) has been started and output is visible here. 2025-08-29 14:27:35.418744 | orchestrator | 2025-08-29 14:27:35.418928 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-08-29 14:27:35.418946 | orchestrator | 2025-08-29 14:27:35.418959 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:27:35.418972 | orchestrator | Friday 29 August 2025 14:27:23 +0000 (0:00:00.109) 0:00:00.109 ********* 2025-08-29 14:27:35.418983 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:27:35.418996 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:27:35.419007 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:35.419018 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:35.419029 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:27:35.419039 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:35.419051 | orchestrator | 2025-08-29 14:27:35.419063 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-08-29 14:27:35.419074 | orchestrator | Friday 29 August 2025 14:27:27 +0000 (0:00:03.507) 0:00:03.617 ********* 2025-08-29 14:27:35.419085 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:27:35.419096 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:27:35.419107 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:27:35.419117 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:35.419162 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:35.419173 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:35.419184 | orchestrator | 2025-08-29 14:27:35.419196 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-08-29 14:27:35.419207 | orchestrator | 2025-08-29 14:27:35.419218 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:27:35.419229 | orchestrator | Friday 29 August 2025 14:27:28 +0000 (0:00:00.649) 0:00:04.267 ********* 2025-08-29 14:27:35.419242 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 14:27:35.419254 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:27:35.419266 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:27:35.419278 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:35.419291 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:35.419303 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:35.419315 | orchestrator | 2025-08-29 14:27:35.419327 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:27:35.419340 | orchestrator | Friday 29 August 2025 14:27:28 +0000 (0:00:00.132) 0:00:04.399 ********* 2025-08-29 14:27:35.419353 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:27:35.419365 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:27:35.419376 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:27:35.419389 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:35.419401 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:35.419413 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:35.419426 | orchestrator | 2025-08-29 14:27:35.419439 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:27:35.419451 | orchestrator | Friday 29 August 2025 14:27:28 +0000 (0:00:00.115) 0:00:04.515 ********* 2025-08-29 14:27:35.419464 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:35.419477 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:35.419489 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:35.419501 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:35.419513 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:35.419526 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:35.419538 | orchestrator | 2025-08-29 14:27:35.419550 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:27:35.419563 | orchestrator | Friday 29 August 2025 14:27:28 +0000 (0:00:00.621) 0:00:05.137 ********* 2025-08-29 14:27:35.419575 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:35.419588 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:35.419600 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:35.419611 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:35.419622 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:35.419632 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:35.419644 | orchestrator | 2025-08-29 14:27:35.419655 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:27:35.419666 | orchestrator | Friday 29 August 2025 14:27:29 +0000 (0:00:00.781) 0:00:05.919 ********* 2025-08-29 14:27:35.419677 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 14:27:35.419688 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 14:27:35.419699 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 14:27:35.419710 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 14:27:35.419721 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 14:27:35.419732 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 14:27:35.419743 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-08-29 14:27:35.419754 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-08-29 14:27:35.419765 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 14:27:35.419776 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-08-29 14:27:35.419787 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-08-29 14:27:35.419798 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 14:27:35.419809 | orchestrator | 2025-08-29 14:27:35.419820 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:27:35.419857 | orchestrator | Friday 29 August 2025 14:27:30 +0000 (0:00:01.191) 0:00:07.110 ********* 2025-08-29 14:27:35.419868 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:35.419879 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:35.419895 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:35.419906 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:35.419916 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:35.419927 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:35.419938 | orchestrator | 2025-08-29 14:27:35.419949 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:27:35.419961 | orchestrator | Friday 29 August 2025 14:27:32 +0000 (0:00:01.245) 0:00:08.356 ********* 2025-08-29 14:27:35.419972 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 14:27:35.419982 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-08-29 14:27:35.419993 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-08-29 14:27:35.420003 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:27:35.420033 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:27:35.420044 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:27:35.420055 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:27:35.420088 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:27:35.420104 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:27:35.420115 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-08-29 14:27:35.420126 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-08-29 14:27:35.420137 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-08-29 14:27:35.420148 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-08-29 14:27:35.420158 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-08-29 14:27:35.420169 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-08-29 14:27:35.420180 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:27:35.420191 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:27:35.420201 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:27:35.420212 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:27:35.420223 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:27:35.420234 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:27:35.420245 | orchestrator | 2025-08-29 14:27:35.420256 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in 
.bashrc configuration file] *** 2025-08-29 14:27:35.420268 | orchestrator | Friday 29 August 2025 14:27:33 +0000 (0:00:01.233) 0:00:09.589 ********* 2025-08-29 14:27:35.420278 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:27:35.420289 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:27:35.420300 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:27:35.420311 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:35.420322 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:35.420333 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:35.420344 | orchestrator | 2025-08-29 14:27:35.420355 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:27:35.420366 | orchestrator | Friday 29 August 2025 14:27:33 +0000 (0:00:00.160) 0:00:09.750 ********* 2025-08-29 14:27:35.420376 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:35.420387 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:35.420398 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:35.420409 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:35.420426 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:35.420437 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:35.420448 | orchestrator | 2025-08-29 14:27:35.420459 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:27:35.420470 | orchestrator | Friday 29 August 2025 14:27:34 +0000 (0:00:00.584) 0:00:10.335 ********* 2025-08-29 14:27:35.420481 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:27:35.420492 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:27:35.420503 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:27:35.420514 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:35.420525 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:35.420536 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:35.420547 | orchestrator | 2025-08-29 14:27:35.420558 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:27:35.420569 | orchestrator | Friday 29 August 2025 14:27:34 +0000 (0:00:00.164) 0:00:10.499 ********* 2025-08-29 14:27:35.420580 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 14:27:35.420591 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 14:27:35.420602 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:35.420613 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 14:27:35.420624 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:35.420635 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 14:27:35.420646 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:35.420657 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 14:27:35.420668 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:35.420679 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 14:27:35.420690 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:35.420700 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:35.420712 | orchestrator | 2025-08-29 14:27:35.420723 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:27:35.420734 | orchestrator | Friday 29 August 2025 14:27:34 +0000 (0:00:00.708) 0:00:11.208 ********* 2025-08-29 14:27:35.420745 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 14:27:35.420756 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:27:35.420766 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:27:35.420777 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:35.420788 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:35.420799 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:35.420810 | orchestrator | 2025-08-29 14:27:35.420821 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:27:35.420858 | orchestrator | Friday 29 August 2025 14:27:35 +0000 (0:00:00.140) 0:00:11.349 ********* 2025-08-29 14:27:35.420869 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:27:35.420879 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:27:35.420890 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:27:35.420901 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:35.420911 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:35.420922 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:35.420933 | orchestrator | 2025-08-29 14:27:35.420944 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:27:35.420955 | orchestrator | Friday 29 August 2025 14:27:35 +0000 (0:00:00.149) 0:00:11.498 ********* 2025-08-29 14:27:35.420966 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:27:35.420977 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:27:35.420988 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:27:35.420999 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:35.421018 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:36.452959 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:36.453087 | orchestrator | 2025-08-29 14:27:36.453102 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:27:36.453116 | orchestrator | Friday 29 August 2025 14:27:35 +0000 (0:00:00.145) 0:00:11.644 ********* 2025-08-29 14:27:36.453157 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:36.453186 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:36.453197 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:36.453208 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:36.453218 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:36.453229 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:36.453240 | orchestrator | 2025-08-29 14:27:36.453251 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:27:36.453262 | orchestrator | Friday 29 August 2025 14:27:36 +0000 (0:00:00.634) 0:00:12.279 ********* 2025-08-29 14:27:36.453272 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:27:36.453283 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:27:36.453294 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:27:36.453304 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:36.453315 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:36.453325 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:36.453336 | orchestrator | 2025-08-29 14:27:36.453346 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:27:36.453359 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:27:36.453372 | 
orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:27:36.453383 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:27:36.453394 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:27:36.453404 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:27:36.453415 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:27:36.453426 | orchestrator | 2025-08-29 14:27:36.453436 | orchestrator | 2025-08-29 14:27:36.453447 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:27:36.453458 | orchestrator | Friday 29 August 2025 14:27:36 +0000 (0:00:00.198) 0:00:12.477 ********* 2025-08-29 14:27:36.453469 | orchestrator | =============================================================================== 2025-08-29 14:27:36.453479 | orchestrator | Gathering Facts --------------------------------------------------------- 3.51s 2025-08-29 14:27:36.453490 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2025-08-29 14:27:36.453501 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s 2025-08-29 14:27:36.453513 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-08-29 14:27:36.453523 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s 2025-08-29 14:27:36.453534 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-08-29 14:27:36.453545 | orchestrator | Do not require tty for all users ---------------------------------------- 0.65s 2025-08-29 14:27:36.453555 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-08-29 14:27:36.453566 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2025-08-29 14:27:36.453576 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2025-08-29 14:27:36.453587 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s 2025-08-29 14:27:36.453597 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-08-29 14:27:36.453615 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2025-08-29 14:27:36.453626 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-08-29 14:27:36.453637 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-08-29 14:27:36.453647 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-08-29 14:27:36.453658 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s 2025-08-29 14:27:36.453668 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.12s 2025-08-29 14:27:36.702438 | orchestrator | + osism apply --environment custom facts 2025-08-29 14:27:38.477773 | orchestrator | 2025-08-29 14:27:38 | INFO  | Trying to run play facts in environment custom 
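
For orientation, the "+" xtrace entries interleaved with the play output record the deployment commands this stage of the job runs. Condensed below (command lines copied from the trace, including the bootstrap step that appears further down in this log); the comments are a hedged reading based on the play output shown here, not additional job output:

    osism apply squid                                # squid proxy container on testbed-manager
    # only when the release is not "latest" (9.2.0 in this run), switch the
    # Kolla image namespace, as the trace above shows:
    sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' \
        /opt/configuration/inventory/group_vars/all/kolla.yml
    osism apply operator -u ubuntu -l testbed-nodes  # create the operator user/group on all testbed nodes
    osism apply --environment custom facts           # distribute custom network/ceph device facts
    osism apply bootstrap                            # base roles: hostname, hosts, proxy, resolvconf, repository, rsyslog, ...
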
2025-08-29 14:27:48.589486 | orchestrator | 2025-08-29 14:27:48 | INFO  | Task 916464e6-2347-437e-9a70-0960963cfd11 (facts) was prepared for execution. 2025-08-29 14:27:48.589617 | orchestrator | 2025-08-29 14:27:48 | INFO  | It takes a moment until task 916464e6-2347-437e-9a70-0960963cfd11 (facts) has been started and output is visible here. 2025-08-29 14:28:32.322962 | orchestrator | 2025-08-29 14:28:32.323081 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-08-29 14:28:32.323098 | orchestrator | 2025-08-29 14:28:32.323110 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 14:28:32.323122 | orchestrator | Friday 29 August 2025 14:27:51 +0000 (0:00:00.062) 0:00:00.062 ********* 2025-08-29 14:28:32.323134 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:32.323165 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:32.323178 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:28:32.323189 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:32.323200 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:28:32.323211 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:32.323221 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:28:32.323232 | orchestrator | 2025-08-29 14:28:32.323244 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-08-29 14:28:32.323255 | orchestrator | Friday 29 August 2025 14:27:53 +0000 (0:00:01.343) 0:00:01.406 ********* 2025-08-29 14:28:32.323266 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:32.323277 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:28:32.323288 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:32.323299 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:32.323310 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:28:32.323321 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:28:32.323332 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:32.323342 | orchestrator | 2025-08-29 14:28:32.323353 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-08-29 14:28:32.323364 | orchestrator | 2025-08-29 14:28:32.323375 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:28:32.323386 | orchestrator | Friday 29 August 2025 14:27:54 +0000 (0:00:01.104) 0:00:02.511 ********* 2025-08-29 14:28:32.323397 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.323408 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.323419 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.323430 | orchestrator | 2025-08-29 14:28:32.323443 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:28:32.323455 | orchestrator | Friday 29 August 2025 14:27:54 +0000 (0:00:00.113) 0:00:02.625 ********* 2025-08-29 14:28:32.323468 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.323481 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.323493 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.323505 | orchestrator | 2025-08-29 14:28:32.323517 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:28:32.323530 | orchestrator | Friday 29 August 2025 14:27:54 +0000 (0:00:00.178) 0:00:02.803 ********* 2025-08-29 14:28:32.323541 | orchestrator | ok: [testbed-node-3] 2025-08-29 
14:28:32.323577 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.323590 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.323602 | orchestrator | 2025-08-29 14:28:32.323614 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:28:32.323626 | orchestrator | Friday 29 August 2025 14:27:54 +0000 (0:00:00.169) 0:00:02.973 ********* 2025-08-29 14:28:32.323640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:28:32.323654 | orchestrator | 2025-08-29 14:28:32.323667 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:28:32.323679 | orchestrator | Friday 29 August 2025 14:27:54 +0000 (0:00:00.102) 0:00:03.076 ********* 2025-08-29 14:28:32.323692 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.323705 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.323716 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.323729 | orchestrator | 2025-08-29 14:28:32.323741 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:28:32.323753 | orchestrator | Friday 29 August 2025 14:27:55 +0000 (0:00:00.371) 0:00:03.447 ********* 2025-08-29 14:28:32.323766 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:28:32.323778 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:28:32.323790 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:28:32.323827 | orchestrator | 2025-08-29 14:28:32.323838 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:28:32.323849 | orchestrator | Friday 29 August 2025 14:27:55 +0000 (0:00:00.118) 0:00:03.566 ********* 2025-08-29 14:28:32.323859 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:32.323870 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:32.323880 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:32.323891 | orchestrator | 2025-08-29 14:28:32.323902 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:28:32.323924 | orchestrator | Friday 29 August 2025 14:27:56 +0000 (0:00:01.007) 0:00:04.574 ********* 2025-08-29 14:28:32.323936 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.323946 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.323957 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.323968 | orchestrator | 2025-08-29 14:28:32.323978 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:28:32.323989 | orchestrator | Friday 29 August 2025 14:27:56 +0000 (0:00:00.432) 0:00:05.006 ********* 2025-08-29 14:28:32.324000 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:32.324010 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:32.324021 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:32.324031 | orchestrator | 2025-08-29 14:28:32.324042 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:28:32.324053 | orchestrator | Friday 29 August 2025 14:27:57 +0000 (0:00:01.026) 0:00:06.032 ********* 2025-08-29 14:28:32.324063 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:32.324074 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:32.324084 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 14:28:32.324095 | orchestrator | 2025-08-29 14:28:32.324105 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-08-29 14:28:32.324116 | orchestrator | Friday 29 August 2025 14:28:15 +0000 (0:00:17.112) 0:00:23.144 ********* 2025-08-29 14:28:32.324127 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:28:32.324137 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:28:32.324148 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:28:32.324158 | orchestrator | 2025-08-29 14:28:32.324169 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-08-29 14:28:32.324198 | orchestrator | Friday 29 August 2025 14:28:15 +0000 (0:00:00.108) 0:00:23.253 ********* 2025-08-29 14:28:32.324209 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:32.324220 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:32.324238 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:32.324249 | orchestrator | 2025-08-29 14:28:32.324260 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 14:28:32.324276 | orchestrator | Friday 29 August 2025 14:28:22 +0000 (0:00:07.253) 0:00:30.506 ********* 2025-08-29 14:28:32.324287 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.324298 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.324309 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.324319 | orchestrator | 2025-08-29 14:28:32.324330 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-08-29 14:28:32.324341 | orchestrator | Friday 29 August 2025 14:28:22 +0000 (0:00:00.398) 0:00:30.905 ********* 2025-08-29 14:28:32.324351 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-08-29 14:28:32.324362 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-08-29 14:28:32.324373 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-08-29 14:28:32.324383 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-08-29 14:28:32.324394 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-08-29 14:28:32.324405 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-08-29 14:28:32.324416 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-08-29 14:28:32.324426 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-08-29 14:28:32.324437 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-08-29 14:28:32.324448 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:28:32.324458 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:28:32.324469 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:28:32.324479 | orchestrator | 2025-08-29 14:28:32.324490 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 14:28:32.324501 | orchestrator | Friday 29 August 2025 14:28:26 +0000 (0:00:03.469) 0:00:34.374 ********* 2025-08-29 14:28:32.324511 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.324522 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.324533 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.324543 | 
orchestrator | 2025-08-29 14:28:32.324554 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:28:32.324565 | orchestrator | 2025-08-29 14:28:32.324576 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:28:32.324586 | orchestrator | Friday 29 August 2025 14:28:27 +0000 (0:00:01.269) 0:00:35.644 ********* 2025-08-29 14:28:32.324597 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:32.324608 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:32.324618 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:32.324629 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:32.324639 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:32.324650 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:32.324660 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:32.324671 | orchestrator | 2025-08-29 14:28:32.324681 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:28:32.324693 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:28:32.324704 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:28:32.324717 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:28:32.324727 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:28:32.324738 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:28:32.324756 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:28:32.324766 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:28:32.324777 | orchestrator | 2025-08-29 14:28:32.324788 | orchestrator | 2025-08-29 14:28:32.324828 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:28:32.324839 | orchestrator | Friday 29 August 2025 14:28:32 +0000 (0:00:04.785) 0:00:40.429 ********* 2025-08-29 14:28:32.324849 | orchestrator | =============================================================================== 2025-08-29 14:28:32.324860 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.11s 2025-08-29 14:28:32.324870 | orchestrator | Install required packages (Debian) -------------------------------------- 7.25s 2025-08-29 14:28:32.324881 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s 2025-08-29 14:28:32.324892 | orchestrator | Copy fact files --------------------------------------------------------- 3.47s 2025-08-29 14:28:32.324902 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s 2025-08-29 14:28:32.324913 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s 2025-08-29 14:28:32.324930 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s 2025-08-29 14:28:32.490843 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s 2025-08-29 14:28:32.490942 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s 2025-08-29 
14:28:32.490954 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s 2025-08-29 14:28:32.490965 | orchestrator | Create custom facts directory ------------------------------------------- 0.40s 2025-08-29 14:28:32.490976 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.37s 2025-08-29 14:28:32.490987 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2025-08-29 14:28:32.490997 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s 2025-08-29 14:28:32.491008 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-08-29 14:28:32.491019 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-08-29 14:28:32.491029 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-08-29 14:28:32.491040 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s 2025-08-29 14:28:32.733931 | orchestrator | + osism apply bootstrap 2025-08-29 14:28:44.641940 | orchestrator | 2025-08-29 14:28:44 | INFO  | Task 7242ded2-ebbc-4efc-9e22-a2ff46425603 (bootstrap) was prepared for execution. 2025-08-29 14:28:44.642220 | orchestrator | 2025-08-29 14:28:44 | INFO  | It takes a moment until task 7242ded2-ebbc-4efc-9e22-a2ff46425603 (bootstrap) has been started and output is visible here. 2025-08-29 14:28:59.614834 | orchestrator | 2025-08-29 14:28:59.614957 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-08-29 14:28:59.614973 | orchestrator | 2025-08-29 14:28:59.615006 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-08-29 14:28:59.615018 | orchestrator | Friday 29 August 2025 14:28:48 +0000 (0:00:00.143) 0:00:00.143 ********* 2025-08-29 14:28:59.615030 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:59.615042 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:59.615053 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:59.615063 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:59.615074 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:59.615085 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:59.615095 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:59.615128 | orchestrator | 2025-08-29 14:28:59.615140 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:28:59.615151 | orchestrator | 2025-08-29 14:28:59.615162 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:28:59.615173 | orchestrator | Friday 29 August 2025 14:28:48 +0000 (0:00:00.211) 0:00:00.355 ********* 2025-08-29 14:28:59.615183 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:59.615194 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:59.615205 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:59.615215 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:59.615226 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:59.615237 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:59.615247 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:59.615258 | orchestrator | 2025-08-29 14:28:59.615268 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-08-29 14:28:59.615279 | 
orchestrator | 2025-08-29 14:28:59.615290 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:28:59.615301 | orchestrator | Friday 29 August 2025 14:28:52 +0000 (0:00:03.683) 0:00:04.039 ********* 2025-08-29 14:28:59.615312 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 14:28:59.615324 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 14:28:59.615337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-08-29 14:28:59.615349 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 14:28:59.615361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 14:28:59.615373 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 14:28:59.615385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 14:28:59.615397 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 14:28:59.615409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 14:28:59.615421 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-08-29 14:28:59.615433 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 14:28:59.615445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 14:28:59.615456 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-08-29 14:28:59.615468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 14:28:59.615480 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 14:28:59.615492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-08-29 14:28:59.615504 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 14:28:59.615516 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:28:59.615528 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-08-29 14:28:59.615540 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 14:28:59.615552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 14:28:59.615564 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:28:59.615576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-08-29 14:28:59.615588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 14:28:59.615600 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 14:28:59.615612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 14:28:59.615624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 14:28:59.615636 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 14:28:59.615648 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 14:28:59.615660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 14:28:59.615678 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-08-29 14:28:59.615696 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-08-29 14:28:59.615707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 14:28:59.615718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 14:28:59.615728 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 14:28:59.615739 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 14:28:59.615750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 14:28:59.615760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-08-29 14:28:59.615771 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-08-29 14:28:59.615801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 14:28:59.615812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 14:28:59.615823 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:28:59.615834 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-08-29 14:28:59.615844 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-08-29 14:28:59.615855 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 14:28:59.615865 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-08-29 14:28:59.615893 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:28:59.615904 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-08-29 14:28:59.615915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-08-29 14:28:59.615925 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-08-29 14:28:59.615936 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:28:59.615946 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-08-29 14:28:59.615957 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:28:59.615967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-08-29 14:28:59.615978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-08-29 14:28:59.615988 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:28:59.615998 | orchestrator | 2025-08-29 14:28:59.616009 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-08-29 14:28:59.616020 | orchestrator | 2025-08-29 14:28:59.616030 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-08-29 14:28:59.616041 | orchestrator | Friday 29 August 2025 14:28:52 +0000 (0:00:00.377) 0:00:04.416 ********* 2025-08-29 14:28:59.616051 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:59.616062 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:59.616072 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:59.616083 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:59.616093 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:59.616103 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:59.616114 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:59.616124 | orchestrator | 2025-08-29 14:28:59.616135 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-08-29 14:28:59.616146 | orchestrator | Friday 29 August 2025 14:28:53 +0000 (0:00:01.146) 0:00:05.563 ********* 2025-08-29 14:28:59.616156 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:59.616167 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:59.616177 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:59.616188 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:59.616198 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:59.616208 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 14:28:59.616219 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:59.616229 | orchestrator | 2025-08-29 14:28:59.616240 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-08-29 14:28:59.616251 | orchestrator | Friday 29 August 2025 14:28:54 +0000 (0:00:01.135) 0:00:06.699 ********* 2025-08-29 14:28:59.616262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:28:59.616282 | orchestrator | 2025-08-29 14:28:59.616293 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-08-29 14:28:59.616304 | orchestrator | Friday 29 August 2025 14:28:55 +0000 (0:00:00.244) 0:00:06.943 ********* 2025-08-29 14:28:59.616315 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:28:59.616325 | orchestrator | changed: [testbed-manager] 2025-08-29 14:28:59.616336 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:28:59.616346 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:28:59.616357 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:59.616367 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:59.616378 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:59.616388 | orchestrator | 2025-08-29 14:28:59.616399 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-08-29 14:28:59.616410 | orchestrator | Friday 29 August 2025 14:28:57 +0000 (0:00:01.906) 0:00:08.850 ********* 2025-08-29 14:28:59.616420 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:28:59.616432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:28:59.616444 | orchestrator | 2025-08-29 14:28:59.616455 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-08-29 14:28:59.616465 | orchestrator | Friday 29 August 2025 14:28:57 +0000 (0:00:00.252) 0:00:09.103 ********* 2025-08-29 14:28:59.616476 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:28:59.616486 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:28:59.616497 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:28:59.616507 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:59.616518 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:59.616528 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:59.616539 | orchestrator | 2025-08-29 14:28:59.616549 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-08-29 14:28:59.616560 | orchestrator | Friday 29 August 2025 14:28:58 +0000 (0:00:01.005) 0:00:10.108 ********* 2025-08-29 14:28:59.616570 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:28:59.616580 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:28:59.616591 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:28:59.616601 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:28:59.616611 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:28:59.616622 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:28:59.616632 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:28:59.616642 | orchestrator 
| 2025-08-29 14:28:59.616653 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-08-29 14:28:59.616663 | orchestrator | Friday 29 August 2025 14:28:59 +0000 (0:00:00.637) 0:00:10.746 ********* 2025-08-29 14:28:59.616674 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:28:59.616684 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:28:59.616695 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:28:59.616705 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:28:59.616715 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:28:59.616726 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:28:59.616736 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:59.616747 | orchestrator | 2025-08-29 14:28:59.616757 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 14:28:59.616768 | orchestrator | Friday 29 August 2025 14:28:59 +0000 (0:00:00.447) 0:00:11.194 ********* 2025-08-29 14:28:59.616796 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:28:59.616807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:28:59.616823 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:11.678548 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:11.678672 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:11.678681 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:11.678711 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:11.678718 | orchestrator | 2025-08-29 14:29:11.678726 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 14:29:11.678734 | orchestrator | Friday 29 August 2025 14:28:59 +0000 (0:00:00.219) 0:00:11.414 ********* 2025-08-29 14:29:11.678743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:11.678767 | orchestrator | 2025-08-29 14:29:11.678818 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 14:29:11.678826 | orchestrator | Friday 29 August 2025 14:28:59 +0000 (0:00:00.282) 0:00:11.696 ********* 2025-08-29 14:29:11.678833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:11.678839 | orchestrator | 2025-08-29 14:29:11.678845 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 14:29:11.678852 | orchestrator | Friday 29 August 2025 14:29:00 +0000 (0:00:00.300) 0:00:11.997 ********* 2025-08-29 14:29:11.678858 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.678865 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.678871 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.678877 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.678883 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.678890 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.678896 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.678902 | orchestrator | 2025-08-29 14:29:11.678908 | orchestrator | TASK [osism.commons.resolvconf : Install package 
systemd-resolved] ************* 2025-08-29 14:29:11.678914 | orchestrator | Friday 29 August 2025 14:29:01 +0000 (0:00:01.292) 0:00:13.289 ********* 2025-08-29 14:29:11.678920 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:29:11.678927 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:11.678932 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:11.678938 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:11.678944 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:11.678950 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:11.678956 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:11.678962 | orchestrator | 2025-08-29 14:29:11.678968 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 14:29:11.678974 | orchestrator | Friday 29 August 2025 14:29:01 +0000 (0:00:00.221) 0:00:13.510 ********* 2025-08-29 14:29:11.678980 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.678986 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.678993 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.678999 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.679005 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679011 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679017 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679023 | orchestrator | 2025-08-29 14:29:11.679029 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 14:29:11.679035 | orchestrator | Friday 29 August 2025 14:29:02 +0000 (0:00:00.523) 0:00:14.033 ********* 2025-08-29 14:29:11.679082 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:29:11.679090 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:11.679097 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:11.679105 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:11.679111 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:11.679118 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:11.679125 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:11.679131 | orchestrator | 2025-08-29 14:29:11.679139 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 14:29:11.679152 | orchestrator | Friday 29 August 2025 14:29:02 +0000 (0:00:00.212) 0:00:14.246 ********* 2025-08-29 14:29:11.679159 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679166 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:11.679172 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:11.679179 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:11.679186 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:11.679192 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:11.679203 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:11.679210 | orchestrator | 2025-08-29 14:29:11.679216 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 14:29:11.679223 | orchestrator | Friday 29 August 2025 14:29:03 +0000 (0:00:00.527) 0:00:14.773 ********* 2025-08-29 14:29:11.679230 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679237 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:11.679244 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:11.679251 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:11.679257 
| orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:11.679264 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:11.679271 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:11.679278 | orchestrator | 2025-08-29 14:29:11.679285 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 14:29:11.679292 | orchestrator | Friday 29 August 2025 14:29:04 +0000 (0:00:01.131) 0:00:15.904 ********* 2025-08-29 14:29:11.679299 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679305 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.679312 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679319 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679326 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.679332 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.679339 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679346 | orchestrator | 2025-08-29 14:29:11.679353 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 14:29:11.679360 | orchestrator | Friday 29 August 2025 14:29:05 +0000 (0:00:01.219) 0:00:17.123 ********* 2025-08-29 14:29:11.679383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:11.679390 | orchestrator | 2025-08-29 14:29:11.679397 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 14:29:11.679404 | orchestrator | Friday 29 August 2025 14:29:05 +0000 (0:00:00.334) 0:00:17.458 ********* 2025-08-29 14:29:11.679411 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:29:11.679418 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:11.679424 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:11.679430 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:11.679435 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:11.679441 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:11.679447 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:11.679453 | orchestrator | 2025-08-29 14:29:11.679459 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:29:11.679465 | orchestrator | Friday 29 August 2025 14:29:07 +0000 (0:00:01.452) 0:00:18.911 ********* 2025-08-29 14:29:11.679471 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679477 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.679483 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.679489 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.679495 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679501 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679507 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679513 | orchestrator | 2025-08-29 14:29:11.679519 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:29:11.679525 | orchestrator | Friday 29 August 2025 14:29:07 +0000 (0:00:00.230) 0:00:19.141 ********* 2025-08-29 14:29:11.679536 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679542 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.679548 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.679554 | orchestrator | 
ok: [testbed-node-2] 2025-08-29 14:29:11.679560 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679566 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679572 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679578 | orchestrator | 2025-08-29 14:29:11.679584 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:29:11.679590 | orchestrator | Friday 29 August 2025 14:29:07 +0000 (0:00:00.214) 0:00:19.355 ********* 2025-08-29 14:29:11.679596 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679602 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.679608 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.679613 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.679619 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679625 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679631 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679637 | orchestrator | 2025-08-29 14:29:11.679643 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:29:11.679649 | orchestrator | Friday 29 August 2025 14:29:07 +0000 (0:00:00.216) 0:00:19.572 ********* 2025-08-29 14:29:11.679656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:11.679664 | orchestrator | 2025-08-29 14:29:11.679670 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:29:11.679677 | orchestrator | Friday 29 August 2025 14:29:08 +0000 (0:00:00.298) 0:00:19.870 ********* 2025-08-29 14:29:11.679682 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679688 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.679694 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.679700 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.679706 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679712 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679718 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679724 | orchestrator | 2025-08-29 14:29:11.679730 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:29:11.679736 | orchestrator | Friday 29 August 2025 14:29:08 +0000 (0:00:00.565) 0:00:20.436 ********* 2025-08-29 14:29:11.679742 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:29:11.679748 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:11.679754 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:11.679760 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:11.679766 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:11.679786 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:11.679792 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:11.679798 | orchestrator | 2025-08-29 14:29:11.679808 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:29:11.679815 | orchestrator | Friday 29 August 2025 14:29:08 +0000 (0:00:00.237) 0:00:20.673 ********* 2025-08-29 14:29:11.679821 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679827 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:11.679833 | orchestrator | changed: [testbed-node-2] 2025-08-29 
14:29:11.679839 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679845 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679851 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:11.679857 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679863 | orchestrator | 2025-08-29 14:29:11.679869 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:29:11.679875 | orchestrator | Friday 29 August 2025 14:29:10 +0000 (0:00:01.064) 0:00:21.738 ********* 2025-08-29 14:29:11.679881 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679891 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:11.679897 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:11.679903 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:11.679909 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679915 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:11.679921 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:11.679927 | orchestrator | 2025-08-29 14:29:11.679933 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:29:11.679939 | orchestrator | Friday 29 August 2025 14:29:10 +0000 (0:00:00.608) 0:00:22.346 ********* 2025-08-29 14:29:11.679945 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:11.679951 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:11.679957 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:11.679963 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:11.679974 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.799722 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.799895 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:51.799914 | orchestrator | 2025-08-29 14:29:51.799927 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:29:51.799940 | orchestrator | Friday 29 August 2025 14:29:11 +0000 (0:00:01.028) 0:00:23.374 ********* 2025-08-29 14:29:51.799951 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.799962 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.799973 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.799984 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:51.799994 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:51.800006 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:51.800017 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:51.800027 | orchestrator | 2025-08-29 14:29:51.800038 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-08-29 14:29:51.800049 | orchestrator | Friday 29 August 2025 14:29:28 +0000 (0:00:17.331) 0:00:40.706 ********* 2025-08-29 14:29:51.800060 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.800071 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.800081 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.800092 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.800103 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.800113 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.800123 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.800134 | orchestrator | 2025-08-29 14:29:51.800145 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-08-29 14:29:51.800156 | orchestrator | Friday 29 August 2025 14:29:29 +0000 (0:00:00.221) 0:00:40.927 ********* 2025-08-29 
14:29:51.800167 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.800178 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.800189 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.800199 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.800210 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.800221 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.800231 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.800242 | orchestrator | 2025-08-29 14:29:51.800255 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-08-29 14:29:51.800267 | orchestrator | Friday 29 August 2025 14:29:29 +0000 (0:00:00.266) 0:00:41.194 ********* 2025-08-29 14:29:51.800279 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.800291 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.800303 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.800314 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.800325 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.800336 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.800346 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.800357 | orchestrator | 2025-08-29 14:29:51.800368 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-08-29 14:29:51.800379 | orchestrator | Friday 29 August 2025 14:29:29 +0000 (0:00:00.224) 0:00:41.418 ********* 2025-08-29 14:29:51.800392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:51.800428 | orchestrator | 2025-08-29 14:29:51.800440 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-08-29 14:29:51.800451 | orchestrator | Friday 29 August 2025 14:29:29 +0000 (0:00:00.286) 0:00:41.705 ********* 2025-08-29 14:29:51.800461 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.800472 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.800483 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.800493 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.800504 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.800514 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.800524 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.800535 | orchestrator | 2025-08-29 14:29:51.800545 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-08-29 14:29:51.800556 | orchestrator | Friday 29 August 2025 14:29:31 +0000 (0:00:01.647) 0:00:43.352 ********* 2025-08-29 14:29:51.800567 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:51.800577 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:51.800588 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:51.800598 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:51.800609 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:51.800619 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:51.800630 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:51.800640 | orchestrator | 2025-08-29 14:29:51.800651 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-08-29 14:29:51.800662 | orchestrator | Friday 29 August 2025 14:29:32 +0000 (0:00:01.112) 0:00:44.465 
********* 2025-08-29 14:29:51.800672 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.800683 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.800694 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.800704 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.800715 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.800725 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.800736 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.800746 | orchestrator | 2025-08-29 14:29:51.800782 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-08-29 14:29:51.800793 | orchestrator | Friday 29 August 2025 14:29:33 +0000 (0:00:00.826) 0:00:45.291 ********* 2025-08-29 14:29:51.800805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:51.800818 | orchestrator | 2025-08-29 14:29:51.800829 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-08-29 14:29:51.800840 | orchestrator | Friday 29 August 2025 14:29:33 +0000 (0:00:00.257) 0:00:45.549 ********* 2025-08-29 14:29:51.800851 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:51.800862 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:51.800872 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:51.800883 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:51.800893 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:51.800904 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:51.800914 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:51.800925 | orchestrator | 2025-08-29 14:29:51.800953 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-08-29 14:29:51.800965 | orchestrator | Friday 29 August 2025 14:29:34 +0000 (0:00:01.052) 0:00:46.601 ********* 2025-08-29 14:29:51.800975 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:29:51.800986 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:51.800997 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:51.801007 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:51.801018 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:51.801037 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:51.801048 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:51.801058 | orchestrator | 2025-08-29 14:29:51.801069 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-08-29 14:29:51.801080 | orchestrator | Friday 29 August 2025 14:29:35 +0000 (0:00:00.311) 0:00:46.913 ********* 2025-08-29 14:29:51.801090 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:51.801101 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:51.801111 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:51.801122 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:51.801132 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:51.801143 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:51.801153 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:51.801164 | orchestrator | 2025-08-29 14:29:51.801174 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-08-29 
14:29:51.801185 | orchestrator | Friday 29 August 2025 14:29:46 +0000 (0:00:11.769) 0:00:58.682 ********* 2025-08-29 14:29:51.801196 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.801206 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.801217 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.801227 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.801238 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.801248 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.801259 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.801269 | orchestrator | 2025-08-29 14:29:51.801280 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-08-29 14:29:51.801291 | orchestrator | Friday 29 August 2025 14:29:47 +0000 (0:00:00.679) 0:00:59.362 ********* 2025-08-29 14:29:51.801301 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.801312 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.801322 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.801333 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.801343 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.801353 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.801364 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.801374 | orchestrator | 2025-08-29 14:29:51.801385 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-08-29 14:29:51.801396 | orchestrator | Friday 29 August 2025 14:29:48 +0000 (0:00:00.926) 0:01:00.288 ********* 2025-08-29 14:29:51.801406 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.801436 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.801447 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.801457 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.801468 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.801478 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.801489 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.801500 | orchestrator | 2025-08-29 14:29:51.801510 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-08-29 14:29:51.801521 | orchestrator | Friday 29 August 2025 14:29:48 +0000 (0:00:00.243) 0:01:00.531 ********* 2025-08-29 14:29:51.801532 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.801542 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.801553 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.801563 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.801574 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.801584 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.801595 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.801605 | orchestrator | 2025-08-29 14:29:51.801616 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-08-29 14:29:51.801626 | orchestrator | Friday 29 August 2025 14:29:49 +0000 (0:00:00.216) 0:01:00.748 ********* 2025-08-29 14:29:51.801637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:29:51.801655 | orchestrator | 2025-08-29 14:29:51.801666 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-08-29 
14:29:51.801677 | orchestrator | Friday 29 August 2025 14:29:49 +0000 (0:00:00.268) 0:01:01.016 ********* 2025-08-29 14:29:51.801687 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.801703 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.801713 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.801724 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.801734 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.801745 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.801782 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.801793 | orchestrator | 2025-08-29 14:29:51.801804 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-08-29 14:29:51.801815 | orchestrator | Friday 29 August 2025 14:29:51 +0000 (0:00:01.701) 0:01:02.718 ********* 2025-08-29 14:29:51.801825 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:51.801836 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:51.801846 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:51.801857 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:51.801867 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:51.801878 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:51.801888 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:51.801899 | orchestrator | 2025-08-29 14:29:51.801909 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-08-29 14:29:51.801920 | orchestrator | Friday 29 August 2025 14:29:51 +0000 (0:00:00.572) 0:01:03.290 ********* 2025-08-29 14:29:51.801931 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:51.801941 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:51.801952 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:51.801962 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:51.801973 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:51.801983 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:51.801994 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:51.802004 | orchestrator | 2025-08-29 14:29:51.802072 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-08-29 14:32:14.353823 | orchestrator | Friday 29 August 2025 14:29:51 +0000 (0:00:00.205) 0:01:03.496 ********* 2025-08-29 14:32:14.353950 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:14.353972 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:14.353987 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:14.354003 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:14.354094 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:14.354112 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:14.354129 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:14.354145 | orchestrator | 2025-08-29 14:32:14.354163 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-08-29 14:32:14.354180 | orchestrator | Friday 29 August 2025 14:29:53 +0000 (0:00:01.301) 0:01:04.798 ********* 2025-08-29 14:32:14.354197 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:14.354215 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:14.354231 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:14.354248 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:14.354264 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:14.354281 | orchestrator | changed: [testbed-node-3] 2025-08-29 
14:32:14.354299 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:14.354316 | orchestrator | 2025-08-29 14:32:14.354334 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-08-29 14:32:14.354352 | orchestrator | Friday 29 August 2025 14:29:55 +0000 (0:00:01.943) 0:01:06.742 ********* 2025-08-29 14:32:14.354370 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:14.354387 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:14.354404 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:14.354422 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:14.354439 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:14.354457 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:14.354508 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:14.354526 | orchestrator | 2025-08-29 14:32:14.354543 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-08-29 14:32:14.354560 | orchestrator | Friday 29 August 2025 14:29:57 +0000 (0:00:02.801) 0:01:09.543 ********* 2025-08-29 14:32:14.354577 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:14.354594 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:14.354611 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:14.354628 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:14.354644 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:14.354660 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:14.354677 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:14.354749 | orchestrator | 2025-08-29 14:32:14.354768 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-08-29 14:32:14.354784 | orchestrator | Friday 29 August 2025 14:30:34 +0000 (0:00:36.941) 0:01:46.484 ********* 2025-08-29 14:32:14.354801 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:14.354817 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:14.354834 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:14.354851 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:14.354867 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:14.354884 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:14.354900 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:14.354916 | orchestrator | 2025-08-29 14:32:14.354933 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-08-29 14:32:14.354948 | orchestrator | Friday 29 August 2025 14:31:54 +0000 (0:01:19.389) 0:03:05.873 ********* 2025-08-29 14:32:14.354963 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:14.354980 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:14.354997 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:14.355013 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:14.355030 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:14.355046 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:14.355062 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:14.355078 | orchestrator | 2025-08-29 14:32:14.355095 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-08-29 14:32:14.355112 | orchestrator | Friday 29 August 2025 14:31:56 +0000 (0:00:01.859) 0:03:07.733 ********* 2025-08-29 14:32:14.355129 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:14.355144 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:14.355160 | orchestrator | ok: [testbed-node-0] 
2025-08-29 14:32:14.355175 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:14.355191 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:14.355207 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:14.355223 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:14.355239 | orchestrator | 2025-08-29 14:32:14.355255 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-08-29 14:32:14.355271 | orchestrator | Friday 29 August 2025 14:32:08 +0000 (0:00:12.281) 0:03:20.014 ********* 2025-08-29 14:32:14.355319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-08-29 14:32:14.355351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-08-29 14:32:14.355419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-08-29 14:32:14.355441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-08-29 14:32:14.355458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-08-29 14:32:14.355476 | orchestrator | 2025-08-29 14:32:14.355493 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-08-29 14:32:14.355509 | orchestrator | Friday 29 August 2025 14:32:08 +0000 (0:00:00.412) 0:03:20.426 ********* 2025-08-29 14:32:14.355526 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:32:14.355543 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:14.355559 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:32:14.355576 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:14.355592 | orchestrator | skipping: 
[testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:32:14.355608 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:14.355625 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:32:14.355641 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:14.355658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:32:14.355674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:32:14.355725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:32:14.355741 | orchestrator | 2025-08-29 14:32:14.355756 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-08-29 14:32:14.355772 | orchestrator | Friday 29 August 2025 14:32:09 +0000 (0:00:00.691) 0:03:21.118 ********* 2025-08-29 14:32:14.355789 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:32:14.355807 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:32:14.355824 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:32:14.355840 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:32:14.355857 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:32:14.355874 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:32:14.355890 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:32:14.355905 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:32:14.355920 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:32:14.355936 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:32:14.355965 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:14.355978 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:32:14.355990 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:32:14.356005 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:32:14.356018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:32:14.356030 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:32:14.356043 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:32:14.356055 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:32:14.356068 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:32:14.356082 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:32:14.356095 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:32:14.356121 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:32:17.613309 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:32:17.613441 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:32:17.613455 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:32:17.613467 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:32:17.613477 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:32:17.613489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:32:17.613499 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:32:17.613510 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:32:17.613521 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:17.613532 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:32:17.613543 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:32:17.613553 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:17.613562 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:32:17.613573 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:32:17.613583 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:32:17.613593 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:32:17.613603 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:32:17.613613 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:32:17.613623 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:32:17.613641 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:32:17.613657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:32:17.613673 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:17.613749 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 14:32:17.613765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 14:32:17.613782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 14:32:17.613799 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 14:32:17.613816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 14:32:17.613833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 14:32:17.613845 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 14:32:17.613878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 14:32:17.613889 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 14:32:17.613905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 14:32:17.613916 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 14:32:17.613927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 14:32:17.613937 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 14:32:17.613948 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 14:32:17.613959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 14:32:17.613969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 14:32:17.613979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 14:32:17.613990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 14:32:17.614001 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 14:32:17.614013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 14:32:17.614120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 14:32:17.614164 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 14:32:17.614181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 14:32:17.614199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 14:32:17.614217 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 14:32:17.614233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 14:32:17.614250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 14:32:17.614267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 14:32:17.614283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 14:32:17.614298 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 14:32:17.614313 | orchestrator | 2025-08-29 14:32:17.614329 | orchestrator | TASK [osism.commons.sysctl : Set sysctl 
parameters on generic] ***************** 2025-08-29 14:32:17.614344 | orchestrator | Friday 29 August 2025 14:32:14 +0000 (0:00:04.928) 0:03:26.047 ********* 2025-08-29 14:32:17.614374 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614389 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614418 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614433 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614463 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:32:17.614480 | orchestrator | 2025-08-29 14:32:17.614498 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-08-29 14:32:17.614514 | orchestrator | Friday 29 August 2025 14:32:15 +0000 (0:00:01.644) 0:03:27.691 ********* 2025-08-29 14:32:17.614535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 14:32:17.614552 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 14:32:17.614569 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.614586 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:17.614603 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 14:32:17.614619 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 14:32:17.614636 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.614653 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.614671 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 14:32:17.614711 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 14:32:17.614728 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 14:32:17.614745 | orchestrator | 2025-08-29 14:32:17.614760 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-08-29 14:32:17.614776 | orchestrator | Friday 29 August 2025 14:32:16 +0000 (0:00:00.642) 0:03:28.333 ********* 2025-08-29 14:32:17.614792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 14:32:17.614819 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 14:32:17.614837 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.614855 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 14:32:17.614871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:17.614886 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.614901 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 14:32:17.614917 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.614934 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 14:32:17.614952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 14:32:17.614968 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 14:32:17.614983 | orchestrator | 2025-08-29 14:32:17.614997 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-08-29 14:32:17.615012 | orchestrator | Friday 29 August 2025 14:32:17 +0000 (0:00:00.673) 0:03:29.007 ********* 2025-08-29 14:32:17.615026 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.615041 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:17.615069 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.615084 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.615100 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:17.615127 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:29.648774 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:29.648909 | orchestrator | 2025-08-29 14:32:29.648920 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-08-29 14:32:29.648930 | orchestrator | Friday 29 August 2025 14:32:17 +0000 (0:00:00.308) 0:03:29.316 ********* 2025-08-29 14:32:29.648937 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:29.648946 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:29.648953 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:29.648959 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:29.648966 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:29.648972 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:29.648979 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:29.648986 | orchestrator | 2025-08-29 14:32:29.648992 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-08-29 14:32:29.648998 | orchestrator | Friday 29 August 2025 14:32:23 +0000 (0:00:06.192) 0:03:35.508 ********* 2025-08-29 14:32:29.649005 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-08-29 14:32:29.649012 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:29.649019 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-08-29 14:32:29.649025 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:29.649031 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-08-29 14:32:29.649038 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:29.649044 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-08-29 14:32:29.649050 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-08-29 14:32:29.649057 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:29.649063 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-08-29 14:32:29.649069 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:29.649075 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:29.649081 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-08-29 14:32:29.649088 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:29.649094 | orchestrator | 2025-08-29 14:32:29.649100 | orchestrator | 
TASK [osism.commons.services : Start/enable required services] ***************** 2025-08-29 14:32:29.649108 | orchestrator | Friday 29 August 2025 14:32:24 +0000 (0:00:00.309) 0:03:35.818 ********* 2025-08-29 14:32:29.649114 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-08-29 14:32:29.649120 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-08-29 14:32:29.649127 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-08-29 14:32:29.649133 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-08-29 14:32:29.649139 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-08-29 14:32:29.649145 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-08-29 14:32:29.649151 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-08-29 14:32:29.649158 | orchestrator | 2025-08-29 14:32:29.649164 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-08-29 14:32:29.649170 | orchestrator | Friday 29 August 2025 14:32:25 +0000 (0:00:00.977) 0:03:36.795 ********* 2025-08-29 14:32:29.649179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:29.649188 | orchestrator | 2025-08-29 14:32:29.649194 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-08-29 14:32:29.649201 | orchestrator | Friday 29 August 2025 14:32:25 +0000 (0:00:00.402) 0:03:37.197 ********* 2025-08-29 14:32:29.649207 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:29.649213 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:29.649219 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:29.649250 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:29.649256 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:29.649264 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:29.649270 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:29.649277 | orchestrator | 2025-08-29 14:32:29.649283 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-08-29 14:32:29.649290 | orchestrator | Friday 29 August 2025 14:32:26 +0000 (0:00:01.397) 0:03:38.595 ********* 2025-08-29 14:32:29.649297 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:29.649303 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:29.649310 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:29.649316 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:29.649323 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:29.649329 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:29.649335 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:29.649342 | orchestrator | 2025-08-29 14:32:29.649365 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-08-29 14:32:29.649373 | orchestrator | Friday 29 August 2025 14:32:27 +0000 (0:00:00.626) 0:03:39.221 ********* 2025-08-29 14:32:29.649380 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:29.649387 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:29.649394 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:29.649401 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:29.649407 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:29.649414 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:29.649421 | 
orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:29.649427 | orchestrator | 2025-08-29 14:32:29.649434 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-08-29 14:32:29.649441 | orchestrator | Friday 29 August 2025 14:32:28 +0000 (0:00:00.582) 0:03:39.804 ********* 2025-08-29 14:32:29.649447 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:29.649454 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:29.649460 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:29.649467 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:29.649474 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:29.649480 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:29.649487 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:29.649494 | orchestrator | 2025-08-29 14:32:29.649501 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-08-29 14:32:29.649508 | orchestrator | Friday 29 August 2025 14:32:28 +0000 (0:00:00.589) 0:03:40.394 ********* 2025-08-29 14:32:29.649538 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476545.670066, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649549 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476599.7947931, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649556 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476605.2660863, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649570 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476595.9253018, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649577 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476567.8693085, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649585 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476599.760458, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649592 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476570.6688836, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:29.649611 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822603 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822834 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822877 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822890 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822907 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822919 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:32:54.822931 | orchestrator | 2025-08-29 14:32:54.822944 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-08-29 14:32:54.822957 | orchestrator | Friday 29 August 2025 14:32:29 +0000 (0:00:00.948) 0:03:41.342 ********* 2025-08-29 14:32:54.822968 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:54.822980 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:54.822991 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:54.823001 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:54.823011 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:54.823022 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:54.823033 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:54.823043 | orchestrator | 2025-08-29 14:32:54.823054 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-08-29 14:32:54.823065 | orchestrator | Friday 29 August 2025 14:32:30 +0000 (0:00:01.161) 0:03:42.503 ********* 2025-08-29 14:32:54.823076 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:54.823087 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:54.823099 | orchestrator | changed: [testbed-node-1] 2025-08-29 
14:32:54.823111 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:54.823143 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:54.823156 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:54.823168 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:54.823180 | orchestrator | 2025-08-29 14:32:54.823193 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-08-29 14:32:54.823214 | orchestrator | Friday 29 August 2025 14:32:31 +0000 (0:00:01.156) 0:03:43.660 ********* 2025-08-29 14:32:54.823226 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:54.823237 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:54.823248 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:54.823258 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:54.823269 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:54.823280 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:54.823291 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:54.823301 | orchestrator | 2025-08-29 14:32:54.823312 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-08-29 14:32:54.823323 | orchestrator | Friday 29 August 2025 14:32:33 +0000 (0:00:01.135) 0:03:44.796 ********* 2025-08-29 14:32:54.823334 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:54.823349 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:54.823368 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:54.823387 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:54.823405 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:54.823421 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:54.823438 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:54.823454 | orchestrator | 2025-08-29 14:32:54.823470 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-08-29 14:32:54.823490 | orchestrator | Friday 29 August 2025 14:32:33 +0000 (0:00:00.316) 0:03:45.112 ********* 2025-08-29 14:32:54.823509 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:54.823528 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:54.823546 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:54.823563 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:54.823580 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:54.823598 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:54.823610 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:54.823621 | orchestrator | 2025-08-29 14:32:54.823632 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-08-29 14:32:54.823642 | orchestrator | Friday 29 August 2025 14:32:34 +0000 (0:00:00.764) 0:03:45.877 ********* 2025-08-29 14:32:54.823656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:54.823721 | orchestrator | 2025-08-29 14:32:54.823733 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-08-29 14:32:54.823744 | orchestrator | Friday 29 August 2025 14:32:34 +0000 (0:00:00.411) 0:03:46.288 ********* 2025-08-29 14:32:54.823755 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:54.823766 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 14:32:54.823777 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:54.823787 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:54.823797 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:54.823808 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:54.823818 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:54.823829 | orchestrator | 2025-08-29 14:32:54.823839 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-08-29 14:32:54.823865 | orchestrator | Friday 29 August 2025 14:32:42 +0000 (0:00:08.249) 0:03:54.537 ********* 2025-08-29 14:32:54.823877 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:54.823888 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:54.823908 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:54.823919 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:54.823930 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:54.823940 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:54.823951 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:54.823961 | orchestrator | 2025-08-29 14:32:54.823972 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-08-29 14:32:54.823994 | orchestrator | Friday 29 August 2025 14:32:44 +0000 (0:00:01.299) 0:03:55.837 ********* 2025-08-29 14:32:54.824005 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:54.824015 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:54.824033 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:54.824044 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:54.824054 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:54.824065 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:54.824075 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:54.824086 | orchestrator | 2025-08-29 14:32:54.824096 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-08-29 14:32:54.824107 | orchestrator | Friday 29 August 2025 14:32:45 +0000 (0:00:00.990) 0:03:56.827 ********* 2025-08-29 14:32:54.824118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:54.824129 | orchestrator | 2025-08-29 14:32:54.824140 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-08-29 14:32:54.824151 | orchestrator | Friday 29 August 2025 14:32:45 +0000 (0:00:00.552) 0:03:57.380 ********* 2025-08-29 14:32:54.824161 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:54.824172 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:54.824183 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:54.824193 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:54.824203 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:54.824214 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:54.824224 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:54.824234 | orchestrator | 2025-08-29 14:32:54.824245 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-08-29 14:32:54.824256 | orchestrator | Friday 29 August 2025 14:32:54 +0000 (0:00:08.517) 0:04:05.898 ********* 2025-08-29 14:32:54.824267 | orchestrator | changed: [testbed-manager] 
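(For orientation: the osism.services.rng and osism.services.smartd tasks recorded above and below amount to installing a package and making sure its service runs. The following is only a rough hand-written sketch of that idea, not the actual role content; the package name rng-tools-debian and the service name smartd are assumptions that vary by distribution.)

    # Minimal sketch, assuming Debian-family hosts; not the osism role itself.
    - hosts: all
      become: true
      tasks:
        - name: Install rng and smartmontools packages
          ansible.builtin.apt:
            name:
              - rng-tools-debian   # assumed package name
              - smartmontools
            state: present
            update_cache: true

        - name: Ensure smartd is enabled and running
          ansible.builtin.service:
            name: smartd           # assumed service name
            state: started
            enabled: true
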
2025-08-29 14:32:54.824277 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:54.824287 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:54.824309 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:05.955491 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.955620 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.955666 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.955674 | orchestrator | 2025-08-29 14:34:05.955683 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-08-29 14:34:05.955692 | orchestrator | Friday 29 August 2025 14:32:54 +0000 (0:00:00.620) 0:04:06.518 ********* 2025-08-29 14:34:05.955700 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:05.955707 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:05.955714 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:05.955721 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:05.955727 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.955735 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.955739 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.955743 | orchestrator | 2025-08-29 14:34:05.955750 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-08-29 14:34:05.955757 | orchestrator | Friday 29 August 2025 14:32:55 +0000 (0:00:01.130) 0:04:07.649 ********* 2025-08-29 14:34:05.955763 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:05.955769 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:05.955774 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:05.955780 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:05.955787 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.955793 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.955800 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.955806 | orchestrator | 2025-08-29 14:34:05.955812 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-08-29 14:34:05.955818 | orchestrator | Friday 29 August 2025 14:32:57 +0000 (0:00:01.103) 0:04:08.752 ********* 2025-08-29 14:34:05.955850 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:05.955859 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:05.955865 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:05.955871 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:05.955877 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:05.955883 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:05.955889 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:05.955895 | orchestrator | 2025-08-29 14:34:05.955901 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-08-29 14:34:05.955909 | orchestrator | Friday 29 August 2025 14:32:57 +0000 (0:00:00.328) 0:04:09.081 ********* 2025-08-29 14:34:05.955916 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:05.955922 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:05.955928 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:05.955935 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:05.955941 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:05.955947 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:05.955953 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:05.955959 | orchestrator | 2025-08-29 14:34:05.955965 | 
orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-08-29 14:34:05.955972 | orchestrator | Friday 29 August 2025 14:32:57 +0000 (0:00:00.309) 0:04:09.390 ********* 2025-08-29 14:34:05.955979 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:05.955985 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:05.955991 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:05.955997 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:05.956003 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:05.956009 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:05.956015 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:05.956020 | orchestrator | 2025-08-29 14:34:05.956024 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-08-29 14:34:05.956029 | orchestrator | Friday 29 August 2025 14:32:57 +0000 (0:00:00.292) 0:04:09.683 ********* 2025-08-29 14:34:05.956033 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:05.956037 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:05.956041 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:05.956046 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:05.956050 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:05.956054 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:05.956058 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:05.956064 | orchestrator | 2025-08-29 14:34:05.956071 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-08-29 14:34:05.956077 | orchestrator | Friday 29 August 2025 14:33:03 +0000 (0:00:05.740) 0:04:15.423 ********* 2025-08-29 14:34:05.956103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:34:05.956113 | orchestrator | 2025-08-29 14:34:05.956120 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-08-29 14:34:05.956127 | orchestrator | Friday 29 August 2025 14:33:04 +0000 (0:00:00.448) 0:04:15.872 ********* 2025-08-29 14:34:05.956133 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956140 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-08-29 14:34:05.956146 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956153 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-08-29 14:34:05.956159 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:05.956165 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956171 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-08-29 14:34:05.956178 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:05.956184 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956199 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-08-29 14:34:05.956205 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:05.956212 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956218 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-08-29 14:34:05.956224 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:05.956231 | 
orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956237 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-08-29 14:34:05.956244 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:05.956269 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:05.956275 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-08-29 14:34:05.956281 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-08-29 14:34:05.956288 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:05.956294 | orchestrator | 2025-08-29 14:34:05.956300 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-08-29 14:34:05.956306 | orchestrator | Friday 29 August 2025 14:33:04 +0000 (0:00:00.449) 0:04:16.322 ********* 2025-08-29 14:34:05.956313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:34:05.956319 | orchestrator | 2025-08-29 14:34:05.956326 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-08-29 14:34:05.956332 | orchestrator | Friday 29 August 2025 14:33:05 +0000 (0:00:00.473) 0:04:16.795 ********* 2025-08-29 14:34:05.956338 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-08-29 14:34:05.956345 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-08-29 14:34:05.956351 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:05.956358 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:05.956364 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-08-29 14:34:05.956371 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-08-29 14:34:05.956377 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:05.956383 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-08-29 14:34:05.956389 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:05.956396 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:05.956402 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-08-29 14:34:05.956408 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:05.956414 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-08-29 14:34:05.956420 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:05.956426 | orchestrator | 2025-08-29 14:34:05.956432 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-08-29 14:34:05.956438 | orchestrator | Friday 29 August 2025 14:33:05 +0000 (0:00:00.389) 0:04:17.185 ********* 2025-08-29 14:34:05.956444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:34:05.956451 | orchestrator | 2025-08-29 14:34:05.956457 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-08-29 14:34:05.956463 | orchestrator | Friday 29 August 2025 14:33:06 +0000 (0:00:00.628) 0:04:17.814 ********* 2025-08-29 14:34:05.956469 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 14:34:05.956475 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.956481 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.956488 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:05.956493 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:05.956499 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:05.956515 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.956519 | orchestrator | 2025-08-29 14:34:05.956522 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-08-29 14:34:05.956526 | orchestrator | Friday 29 August 2025 14:33:41 +0000 (0:00:35.501) 0:04:53.315 ********* 2025-08-29 14:34:05.956530 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:05.956533 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:05.956537 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:05.956541 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.956545 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:05.956548 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.956552 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.956556 | orchestrator | 2025-08-29 14:34:05.956560 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-08-29 14:34:05.956564 | orchestrator | Friday 29 August 2025 14:33:50 +0000 (0:00:08.467) 0:05:01.783 ********* 2025-08-29 14:34:05.956568 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:05.956571 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:05.956575 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.956579 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.956583 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:05.956586 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:05.956590 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.956594 | orchestrator | 2025-08-29 14:34:05.956598 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-08-29 14:34:05.956601 | orchestrator | Friday 29 August 2025 14:33:58 +0000 (0:00:07.967) 0:05:09.751 ********* 2025-08-29 14:34:05.956605 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:05.956609 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:05.956613 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:05.956616 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:05.956621 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:05.956673 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:05.956681 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:05.956687 | orchestrator | 2025-08-29 14:34:05.956693 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-08-29 14:34:05.956700 | orchestrator | Friday 29 August 2025 14:33:59 +0000 (0:00:01.898) 0:05:11.649 ********* 2025-08-29 14:34:05.956706 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:05.956712 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:05.956720 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:05.956724 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:05.956727 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:05.956731 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:05.956735 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:05.956738 | orchestrator 
| 2025-08-29 14:34:05.956742 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-08-29 14:34:05.956750 | orchestrator | Friday 29 August 2025 14:34:05 +0000 (0:00:06.000) 0:05:17.650 ********* 2025-08-29 14:34:17.519179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:34:17.519289 | orchestrator | 2025-08-29 14:34:17.519307 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-08-29 14:34:17.519321 | orchestrator | Friday 29 August 2025 14:34:06 +0000 (0:00:00.425) 0:05:18.076 ********* 2025-08-29 14:34:17.519332 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:17.519344 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:17.519353 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:17.519359 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:17.519366 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:17.519372 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:17.519379 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:17.519402 | orchestrator | 2025-08-29 14:34:17.519409 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-08-29 14:34:17.519416 | orchestrator | Friday 29 August 2025 14:34:07 +0000 (0:00:00.742) 0:05:18.818 ********* 2025-08-29 14:34:17.519426 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:17.519439 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:17.519449 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:17.519459 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:17.519468 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:17.519478 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:17.519488 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:17.519498 | orchestrator | 2025-08-29 14:34:17.519508 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-08-29 14:34:17.519518 | orchestrator | Friday 29 August 2025 14:34:08 +0000 (0:00:01.741) 0:05:20.560 ********* 2025-08-29 14:34:17.519530 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:17.519540 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:17.519550 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:17.519556 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:17.519562 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:17.519578 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:17.519584 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:17.519590 | orchestrator | 2025-08-29 14:34:17.519597 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-08-29 14:34:17.519603 | orchestrator | Friday 29 August 2025 14:34:09 +0000 (0:00:00.796) 0:05:21.357 ********* 2025-08-29 14:34:17.519614 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:17.519651 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:17.519661 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:17.519671 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:17.519681 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:17.519691 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:17.519701 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:17.519712 | orchestrator | 2025-08-29 14:34:17.519721 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-08-29 14:34:17.519733 | orchestrator | Friday 29 August 2025 14:34:09 +0000 (0:00:00.306) 0:05:21.664 ********* 2025-08-29 14:34:17.519744 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:17.519751 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:17.519761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:17.519772 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:17.519782 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:17.519792 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:17.519804 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:17.519811 | orchestrator | 2025-08-29 14:34:17.519818 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-08-29 14:34:17.519825 | orchestrator | Friday 29 August 2025 14:34:10 +0000 (0:00:00.458) 0:05:22.122 ********* 2025-08-29 14:34:17.519832 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:17.519839 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:17.519848 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:17.519859 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:17.519869 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:17.519879 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:17.519891 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:17.519898 | orchestrator | 2025-08-29 14:34:17.519911 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-08-29 14:34:17.519921 | orchestrator | Friday 29 August 2025 14:34:10 +0000 (0:00:00.322) 0:05:22.445 ********* 2025-08-29 14:34:17.519933 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:17.519943 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:17.519953 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:17.519963 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:17.519974 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:17.519993 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:17.520005 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:17.520015 | orchestrator | 2025-08-29 14:34:17.520025 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-08-29 14:34:17.520033 | orchestrator | Friday 29 August 2025 14:34:11 +0000 (0:00:00.319) 0:05:22.764 ********* 2025-08-29 14:34:17.520040 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:17.520047 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:17.520054 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:17.520061 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:17.520070 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:17.520080 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:17.520090 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:17.520101 | orchestrator | 2025-08-29 14:34:17.520110 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-08-29 14:34:17.520122 | orchestrator | Friday 29 August 2025 14:34:11 +0000 (0:00:00.326) 0:05:23.091 ********* 2025-08-29 14:34:17.520128 | orchestrator | ok: [testbed-manager] =>  2025-08-29 14:34:17.520134 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520140 | 
orchestrator | ok: [testbed-node-0] =>  2025-08-29 14:34:17.520146 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520152 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 14:34:17.520346 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520356 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 14:34:17.520362 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520369 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 14:34:17.520375 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520396 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 14:34:17.520403 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520409 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 14:34:17.520415 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:34:17.520421 | orchestrator | 2025-08-29 14:34:17.520427 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-08-29 14:34:17.520433 | orchestrator | Friday 29 August 2025 14:34:11 +0000 (0:00:00.301) 0:05:23.393 ********* 2025-08-29 14:34:17.520443 | orchestrator | ok: [testbed-manager] =>  2025-08-29 14:34:17.520454 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520460 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 14:34:17.520470 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520481 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 14:34:17.520487 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520493 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 14:34:17.520499 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520509 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 14:34:17.520519 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520528 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 14:34:17.520537 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520543 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 14:34:17.520549 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:34:17.520555 | orchestrator | 2025-08-29 14:34:17.520561 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-08-29 14:34:17.520567 | orchestrator | Friday 29 August 2025 14:34:12 +0000 (0:00:00.480) 0:05:23.873 ********* 2025-08-29 14:34:17.520574 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:17.520584 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:17.520593 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:17.520603 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:17.520612 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:17.520618 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:17.520652 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:17.520659 | orchestrator | 2025-08-29 14:34:17.520665 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-08-29 14:34:17.520679 | orchestrator | Friday 29 August 2025 14:34:12 +0000 (0:00:00.295) 0:05:24.169 ********* 2025-08-29 14:34:17.520686 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:17.520695 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:17.520705 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:17.520715 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:17.520724 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:17.520734 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:17.520740 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:17.520746 | orchestrator | 2025-08-29 14:34:17.520752 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-08-29 14:34:17.520758 | orchestrator | Friday 29 August 2025 14:34:12 +0000 (0:00:00.286) 0:05:24.456 ********* 2025-08-29 14:34:17.520766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:34:17.520775 | orchestrator | 2025-08-29 14:34:17.520781 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-08-29 14:34:17.520787 | orchestrator | Friday 29 August 2025 14:34:13 +0000 (0:00:00.445) 0:05:24.901 ********* 2025-08-29 14:34:17.520793 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:17.520799 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:17.520805 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:17.520811 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:17.520817 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:17.520823 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:17.520829 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:17.520835 | orchestrator | 2025-08-29 14:34:17.520841 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-08-29 14:34:17.520847 | orchestrator | Friday 29 August 2025 14:34:14 +0000 (0:00:00.907) 0:05:25.809 ********* 2025-08-29 14:34:17.520853 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:17.520859 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:17.520869 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:17.520876 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:17.520882 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:17.520887 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:17.520893 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:17.520899 | orchestrator | 2025-08-29 14:34:17.520906 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-08-29 14:34:17.520913 | orchestrator | Friday 29 August 2025 14:34:16 +0000 (0:00:02.846) 0:05:28.655 ********* 2025-08-29 14:34:17.520920 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-08-29 14:34:17.520926 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-08-29 14:34:17.520933 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-08-29 14:34:17.520939 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-08-29 14:34:17.520945 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-08-29 14:34:17.520951 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-08-29 14:34:17.520957 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:17.520963 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-08-29 14:34:17.520969 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-08-29 14:34:17.520975 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-08-29 14:34:17.520981 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:17.520986 | orchestrator | skipping: [testbed-node-2] => 
(item=containerd)  2025-08-29 14:34:17.520992 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-08-29 14:34:17.520999 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-08-29 14:34:17.521005 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:17.521010 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-08-29 14:34:17.521021 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-08-29 14:34:17.521033 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-08-29 14:35:18.913881 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:18.913998 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-08-29 14:35:18.914012 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-08-29 14:35:18.914077 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:18.914087 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-08-29 14:35:18.914097 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:18.914107 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-08-29 14:35:18.914117 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-08-29 14:35:18.914126 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-08-29 14:35:18.914136 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:18.914146 | orchestrator | 2025-08-29 14:35:18.914157 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-08-29 14:35:18.914168 | orchestrator | Friday 29 August 2025 14:34:17 +0000 (0:00:00.769) 0:05:29.425 ********* 2025-08-29 14:35:18.914178 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.914188 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.914197 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.914207 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914217 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914226 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.914236 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.914245 | orchestrator | 2025-08-29 14:35:18.914255 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-08-29 14:35:18.914264 | orchestrator | Friday 29 August 2025 14:34:24 +0000 (0:00:06.589) 0:05:36.015 ********* 2025-08-29 14:35:18.914274 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914283 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914293 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.914302 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.914312 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.914321 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.914331 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.914340 | orchestrator | 2025-08-29 14:35:18.914350 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-08-29 14:35:18.914359 | orchestrator | Friday 29 August 2025 14:34:25 +0000 (0:00:01.048) 0:05:37.064 ********* 2025-08-29 14:35:18.914370 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.914379 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914389 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.914398 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914407 | orchestrator | 
changed: [testbed-node-4] 2025-08-29 14:35:18.914417 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.914426 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.914436 | orchestrator | 2025-08-29 14:35:18.914445 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-08-29 14:35:18.914455 | orchestrator | Friday 29 August 2025 14:34:33 +0000 (0:00:08.082) 0:05:45.146 ********* 2025-08-29 14:35:18.914464 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914474 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914483 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:18.914493 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.914502 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.914512 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.914521 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.914531 | orchestrator | 2025-08-29 14:35:18.914540 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-08-29 14:35:18.914550 | orchestrator | Friday 29 August 2025 14:34:36 +0000 (0:00:03.344) 0:05:48.490 ********* 2025-08-29 14:35:18.914559 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.914611 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914622 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914631 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.914641 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.914650 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.914659 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.914668 | orchestrator | 2025-08-29 14:35:18.914678 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-08-29 14:35:18.914702 | orchestrator | Friday 29 August 2025 14:34:38 +0000 (0:00:01.641) 0:05:50.132 ********* 2025-08-29 14:35:18.914712 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.914721 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914731 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914741 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.914750 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.914759 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.914769 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.914778 | orchestrator | 2025-08-29 14:35:18.914787 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-08-29 14:35:18.914797 | orchestrator | Friday 29 August 2025 14:34:39 +0000 (0:00:01.370) 0:05:51.503 ********* 2025-08-29 14:35:18.914806 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:18.914816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:18.914835 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:18.914844 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:18.914854 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:18.914863 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:18.914872 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:18.914887 | orchestrator | 2025-08-29 14:35:18.914903 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-08-29 14:35:18.914919 | orchestrator | Friday 29 August 2025 14:34:40 +0000 (0:00:00.712) 0:05:52.215 
********* 2025-08-29 14:35:18.914934 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.914948 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.914963 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.914978 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.914993 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.915007 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.915021 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.915035 | orchestrator | 2025-08-29 14:35:18.915050 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-08-29 14:35:18.915064 | orchestrator | Friday 29 August 2025 14:34:50 +0000 (0:00:10.085) 0:06:02.301 ********* 2025-08-29 14:35:18.915079 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:18.915117 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.915133 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.915148 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.915162 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.915177 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.915192 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.915207 | orchestrator | 2025-08-29 14:35:18.915222 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-08-29 14:35:18.915238 | orchestrator | Friday 29 August 2025 14:34:51 +0000 (0:00:01.002) 0:06:03.304 ********* 2025-08-29 14:35:18.915253 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.915267 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.915283 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.915297 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.915312 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.915327 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.915343 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.915358 | orchestrator | 2025-08-29 14:35:18.915373 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-08-29 14:35:18.915402 | orchestrator | Friday 29 August 2025 14:35:01 +0000 (0:00:09.542) 0:06:12.847 ********* 2025-08-29 14:35:18.915417 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.915432 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.915449 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.915463 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.915478 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.915493 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.915508 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.915523 | orchestrator | 2025-08-29 14:35:18.915538 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-08-29 14:35:18.915552 | orchestrator | Friday 29 August 2025 14:35:12 +0000 (0:00:11.068) 0:06:23.916 ********* 2025-08-29 14:35:18.915567 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-08-29 14:35:18.915582 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-08-29 14:35:18.915638 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-08-29 14:35:18.915655 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-08-29 14:35:18.915670 | orchestrator | ok: [testbed-manager] => (item=python-docker) 
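(The docker install sequence logged above — repository key, repository, version pin, containerd, docker-cli, docker engine — can be approximated by a short playbook like the sketch below. Assumptions: the upstream download.docker.com repository, the noble suite for Ubuntu 24.04, and the docker-ce/docker-ce-cli/containerd.io package names are the usual upstream ones and may not match what the osism.services.docker role actually does; the version 5:27.5.1 is the value printed by this job.)

    # Sketch of a pinned Docker CE install; see assumptions above.
    - hosts: all
      become: true
      vars:
        docker_version: "5:27.5.1"   # taken from the job output above
      tasks:
        - name: Ensure apt keyring directory exists
          ansible.builtin.file:
            path: /etc/apt/keyrings
            state: directory
            mode: "0755"

        - name: Add Docker repository gpg key
          ansible.builtin.get_url:
            url: https://download.docker.com/linux/ubuntu/gpg   # assumed upstream repo
            dest: /etc/apt/keyrings/docker.asc
            mode: "0644"

        - name: Add Docker repository
          ansible.builtin.apt_repository:
            repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable"
            state: present

        - name: Install containerd, docker-cli and docker at the pinned version
          ansible.builtin.apt:
            name:
              - containerd.io
              - "docker-ce-cli={{ docker_version }}*"
              - "docker-ce={{ docker_version }}*"
            state: present
            update_cache: true
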
2025-08-29 14:35:18.915685 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-08-29 14:35:18.915701 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-08-29 14:35:18.915715 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-08-29 14:35:18.915731 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-08-29 14:35:18.915747 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-08-29 14:35:18.915794 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-08-29 14:35:18.915810 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-08-29 14:35:18.915825 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-08-29 14:35:18.915840 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-08-29 14:35:18.915855 | orchestrator | 2025-08-29 14:35:18.915871 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-08-29 14:35:18.915887 | orchestrator | Friday 29 August 2025 14:35:13 +0000 (0:00:01.246) 0:06:25.162 ********* 2025-08-29 14:35:18.915903 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:18.915918 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:18.915934 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:18.915949 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:18.915965 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:18.915980 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:18.915996 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:18.916012 | orchestrator | 2025-08-29 14:35:18.916028 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-08-29 14:35:18.916043 | orchestrator | Friday 29 August 2025 14:35:14 +0000 (0:00:00.629) 0:06:25.791 ********* 2025-08-29 14:35:18.916058 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:18.916073 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:18.916088 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:18.916116 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:18.916133 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:18.916149 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:18.916165 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:18.916181 | orchestrator | 2025-08-29 14:35:18.916198 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-08-29 14:35:18.916209 | orchestrator | Friday 29 August 2025 14:35:17 +0000 (0:00:03.895) 0:06:29.686 ********* 2025-08-29 14:35:18.916219 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:18.916228 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:18.916237 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:18.916246 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:18.916256 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:18.916276 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:18.916286 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:18.916295 | orchestrator | 2025-08-29 14:35:18.916305 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-08-29 14:35:18.916315 | orchestrator | Friday 29 August 2025 14:35:18 +0000 (0:00:00.560) 0:06:30.247 ********* 2025-08-29 14:35:18.916325 | orchestrator | skipping: 
[testbed-manager] => (item=python3-docker)  2025-08-29 14:35:18.916335 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-08-29 14:35:18.916344 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:18.916353 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-08-29 14:35:18.916363 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-08-29 14:35:18.916372 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:18.916382 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-08-29 14:35:18.916391 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-08-29 14:35:18.916401 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:18.916410 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-08-29 14:35:18.916433 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-08-29 14:35:39.206633 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:39.206763 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-08-29 14:35:39.206784 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-08-29 14:35:39.206801 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:39.206816 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-08-29 14:35:39.206833 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-08-29 14:35:39.206848 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:39.206864 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-08-29 14:35:39.206881 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-08-29 14:35:39.206897 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:39.206915 | orchestrator | 2025-08-29 14:35:39.206934 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-08-29 14:35:39.206950 | orchestrator | Friday 29 August 2025 14:35:19 +0000 (0:00:00.712) 0:06:30.959 ********* 2025-08-29 14:35:39.206966 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:39.206981 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:39.206996 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:39.207011 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:39.207028 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:39.207044 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:39.207065 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:39.207083 | orchestrator | 2025-08-29 14:35:39.207100 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-08-29 14:35:39.207116 | orchestrator | Friday 29 August 2025 14:35:19 +0000 (0:00:00.569) 0:06:31.529 ********* 2025-08-29 14:35:39.207133 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:39.207151 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:39.207170 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:39.207186 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:39.207202 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:39.207218 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:39.207236 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:39.207256 | orchestrator | 2025-08-29 14:35:39.207273 | orchestrator | TASK [osism.services.docker : Install packages required by docker 
login] ******* 2025-08-29 14:35:39.207292 | orchestrator | Friday 29 August 2025 14:35:20 +0000 (0:00:00.507) 0:06:32.036 ********* 2025-08-29 14:35:39.207311 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:39.207330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:39.207347 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:39.207398 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:39.207415 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:39.207433 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:39.207449 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:39.207465 | orchestrator | 2025-08-29 14:35:39.207481 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-08-29 14:35:39.207496 | orchestrator | Friday 29 August 2025 14:35:21 +0000 (0:00:00.756) 0:06:32.793 ********* 2025-08-29 14:35:39.207510 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.207527 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:39.207541 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:39.207556 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:39.207570 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:39.207610 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:39.207627 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:39.207644 | orchestrator | 2025-08-29 14:35:39.207660 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-08-29 14:35:39.207677 | orchestrator | Friday 29 August 2025 14:35:22 +0000 (0:00:01.909) 0:06:34.703 ********* 2025-08-29 14:35:39.207694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:35:39.207712 | orchestrator | 2025-08-29 14:35:39.207729 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-08-29 14:35:39.207745 | orchestrator | Friday 29 August 2025 14:35:23 +0000 (0:00:00.888) 0:06:35.592 ********* 2025-08-29 14:35:39.207760 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.207779 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:39.207802 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:39.207891 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:39.207912 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:39.207929 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:39.207945 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:39.207956 | orchestrator | 2025-08-29 14:35:39.207965 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-08-29 14:35:39.207974 | orchestrator | Friday 29 August 2025 14:35:24 +0000 (0:00:00.869) 0:06:36.461 ********* 2025-08-29 14:35:39.207983 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.207992 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:39.208000 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:39.208009 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:39.208018 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:39.208026 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:39.208034 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:39.208043 | orchestrator | 2025-08-29 14:35:39.208051 | orchestrator 
| TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-08-29 14:35:39.208060 | orchestrator | Friday 29 August 2025 14:35:25 +0000 (0:00:01.135) 0:06:37.597 ********* 2025-08-29 14:35:39.208069 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.208077 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:39.208086 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:39.208094 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:39.208103 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:39.208111 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:39.208139 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:39.208148 | orchestrator | 2025-08-29 14:35:39.208156 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-08-29 14:35:39.208166 | orchestrator | Friday 29 August 2025 14:35:27 +0000 (0:00:01.450) 0:06:39.048 ********* 2025-08-29 14:35:39.208205 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:39.208227 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:39.208242 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:39.208256 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:39.208312 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:39.208330 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:39.208339 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:39.208351 | orchestrator | 2025-08-29 14:35:39.208365 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-08-29 14:35:39.208383 | orchestrator | Friday 29 August 2025 14:35:28 +0000 (0:00:01.405) 0:06:40.453 ********* 2025-08-29 14:35:39.208405 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.208417 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:39.208430 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:39.208443 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:39.208455 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:39.208467 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:39.208480 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:39.208492 | orchestrator | 2025-08-29 14:35:39.208505 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-08-29 14:35:39.208518 | orchestrator | Friday 29 August 2025 14:35:30 +0000 (0:00:01.304) 0:06:41.758 ********* 2025-08-29 14:35:39.208531 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:39.208545 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:39.208559 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:39.208572 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:39.208651 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:39.208665 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:39.208673 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:39.208682 | orchestrator | 2025-08-29 14:35:39.208690 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-08-29 14:35:39.208699 | orchestrator | Friday 29 August 2025 14:35:31 +0000 (0:00:01.516) 0:06:43.275 ********* 2025-08-29 14:35:39.208708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
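(The config and service tasks around this point essentially drop a daemon.json plus systemd overlays and then keep dockerd, its socket and containerd enabled. A stripped-down equivalent follows; the daemon.json content shown is a generic example, since the values the testbed actually deploys are not visible in this log.)

    # Sketch only; configuration values are placeholders, not the role's real content.
    - hosts: all
      become: true
      tasks:
        - name: Copy daemon.json configuration file
          ansible.builtin.copy:
            dest: /etc/docker/daemon.json
            content: |
              {
                "log-driver": "json-file",
                "log-opts": { "max-size": "10m", "max-file": "3" }
              }
            mode: "0644"
          notify: Restart docker

        - name: Manage containerd, docker socket and docker services
          ansible.builtin.service:
            name: "{{ item }}"
            state: started
            enabled: true
          loop:
            - containerd
            - docker.socket
            - docker

      handlers:
        - name: Restart docker
          ansible.builtin.service:
            name: docker
            state: restarted
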
2025-08-29 14:35:39.208718 | orchestrator | 2025-08-29 14:35:39.208727 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-08-29 14:35:39.208735 | orchestrator | Friday 29 August 2025 14:35:32 +0000 (0:00:01.177) 0:06:44.452 ********* 2025-08-29 14:35:39.208744 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:39.208752 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.208760 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:39.208772 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:39.208786 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:39.208803 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:39.208824 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:39.208838 | orchestrator | 2025-08-29 14:35:39.208851 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-08-29 14:35:39.208866 | orchestrator | Friday 29 August 2025 14:35:34 +0000 (0:00:01.421) 0:06:45.873 ********* 2025-08-29 14:35:39.208879 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.208887 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:39.208896 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:39.208905 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:39.208913 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:39.208922 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:39.208930 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:39.208938 | orchestrator | 2025-08-29 14:35:39.208947 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-08-29 14:35:39.208955 | orchestrator | Friday 29 August 2025 14:35:35 +0000 (0:00:01.132) 0:06:47.005 ********* 2025-08-29 14:35:39.208962 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.208970 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:39.208977 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:39.208985 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:39.208993 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:39.209000 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:39.209027 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:39.209035 | orchestrator | 2025-08-29 14:35:39.209042 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-08-29 14:35:39.209057 | orchestrator | Friday 29 August 2025 14:35:36 +0000 (0:00:01.389) 0:06:48.395 ********* 2025-08-29 14:35:39.209065 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:39.209073 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:39.209080 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:39.209088 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:39.209095 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:39.209103 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:39.209110 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:39.209118 | orchestrator | 2025-08-29 14:35:39.209126 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-08-29 14:35:39.209133 | orchestrator | Friday 29 August 2025 14:35:37 +0000 (0:00:01.207) 0:06:49.602 ********* 2025-08-29 14:35:39.209142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:35:39.209150 | 
orchestrator | 2025-08-29 14:35:39.209158 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:35:39.209165 | orchestrator | Friday 29 August 2025 14:35:38 +0000 (0:00:00.972) 0:06:50.575 ********* 2025-08-29 14:35:39.209173 | orchestrator | 2025-08-29 14:35:39.209183 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:35:39.209196 | orchestrator | Friday 29 August 2025 14:35:38 +0000 (0:00:00.040) 0:06:50.615 ********* 2025-08-29 14:35:39.209217 | orchestrator | 2025-08-29 14:35:39.209230 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:35:39.209243 | orchestrator | Friday 29 August 2025 14:35:38 +0000 (0:00:00.039) 0:06:50.655 ********* 2025-08-29 14:35:39.209257 | orchestrator | 2025-08-29 14:35:39.209268 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:35:39.209276 | orchestrator | Friday 29 August 2025 14:35:38 +0000 (0:00:00.051) 0:06:50.706 ********* 2025-08-29 14:35:39.209283 | orchestrator | 2025-08-29 14:35:39.209302 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:36:05.757136 | orchestrator | Friday 29 August 2025 14:35:39 +0000 (0:00:00.042) 0:06:50.749 ********* 2025-08-29 14:36:05.757216 | orchestrator | 2025-08-29 14:36:05.757223 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:36:05.757228 | orchestrator | Friday 29 August 2025 14:35:39 +0000 (0:00:00.043) 0:06:50.793 ********* 2025-08-29 14:36:05.757232 | orchestrator | 2025-08-29 14:36:05.757236 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 14:36:05.757240 | orchestrator | Friday 29 August 2025 14:35:39 +0000 (0:00:00.048) 0:06:50.841 ********* 2025-08-29 14:36:05.757244 | orchestrator | 2025-08-29 14:36:05.757248 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 14:36:05.757252 | orchestrator | Friday 29 August 2025 14:35:39 +0000 (0:00:00.045) 0:06:50.887 ********* 2025-08-29 14:36:05.757256 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:05.757261 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:05.757265 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:05.757269 | orchestrator | 2025-08-29 14:36:05.757272 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-08-29 14:36:05.757276 | orchestrator | Friday 29 August 2025 14:35:40 +0000 (0:00:01.444) 0:06:52.332 ********* 2025-08-29 14:36:05.757280 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:05.757285 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:05.757289 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:05.757292 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:05.757296 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:05.757300 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:05.757303 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:05.757325 | orchestrator | 2025-08-29 14:36:05.757329 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-08-29 14:36:05.757333 | orchestrator | Friday 29 August 2025 14:35:41 +0000 (0:00:01.347) 0:06:53.679 ********* 2025-08-29 14:36:05.757337 | 
orchestrator | changed: [testbed-manager] 2025-08-29 14:36:05.757340 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:05.757344 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:05.757347 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:05.757351 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:05.757355 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:05.757358 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:05.757362 | orchestrator | 2025-08-29 14:36:05.757366 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-08-29 14:36:05.757369 | orchestrator | Friday 29 August 2025 14:35:43 +0000 (0:00:01.139) 0:06:54.818 ********* 2025-08-29 14:36:05.757373 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:05.757376 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:05.757380 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:05.757384 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:05.757387 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:05.757391 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:05.757395 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:05.757398 | orchestrator | 2025-08-29 14:36:05.757402 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-08-29 14:36:05.757406 | orchestrator | Friday 29 August 2025 14:35:45 +0000 (0:00:02.375) 0:06:57.194 ********* 2025-08-29 14:36:05.757410 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:05.757413 | orchestrator | 2025-08-29 14:36:05.757417 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-08-29 14:36:05.757421 | orchestrator | Friday 29 August 2025 14:35:45 +0000 (0:00:00.117) 0:06:57.311 ********* 2025-08-29 14:36:05.757425 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:05.757428 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:05.757432 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:05.757436 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:05.757439 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:05.757443 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:05.757446 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:05.757450 | orchestrator | 2025-08-29 14:36:05.757454 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-08-29 14:36:05.757469 | orchestrator | Friday 29 August 2025 14:35:46 +0000 (0:00:01.072) 0:06:58.384 ********* 2025-08-29 14:36:05.757473 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:05.757476 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:05.757480 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:05.757484 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:05.757487 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:05.757491 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:05.757494 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:05.757498 | orchestrator | 2025-08-29 14:36:05.757502 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-08-29 14:36:05.757505 | orchestrator | Friday 29 August 2025 14:35:47 +0000 (0:00:00.743) 0:06:59.127 ********* 2025-08-29 14:36:05.757510 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:05.757515 | orchestrator | 2025-08-29 14:36:05.757519 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-08-29 14:36:05.757523 | orchestrator | Friday 29 August 2025 14:35:48 +0000 (0:00:00.955) 0:07:00.083 ********* 2025-08-29 14:36:05.757527 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:05.757530 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:05.757538 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:05.757542 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:05.757546 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:05.757549 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:05.757553 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:05.757557 | orchestrator | 2025-08-29 14:36:05.757560 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-08-29 14:36:05.757564 | orchestrator | Friday 29 August 2025 14:35:49 +0000 (0:00:00.847) 0:07:00.930 ********* 2025-08-29 14:36:05.757605 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-08-29 14:36:05.757612 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-08-29 14:36:05.757631 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-08-29 14:36:05.757635 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-08-29 14:36:05.757639 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-08-29 14:36:05.757643 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-08-29 14:36:05.757647 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-08-29 14:36:05.757651 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-08-29 14:36:05.757654 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-08-29 14:36:05.757658 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-08-29 14:36:05.757662 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-08-29 14:36:05.757665 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-08-29 14:36:05.757669 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-08-29 14:36:05.757675 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-08-29 14:36:05.757681 | orchestrator | 2025-08-29 14:36:05.757687 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-08-29 14:36:05.757693 | orchestrator | Friday 29 August 2025 14:35:51 +0000 (0:00:02.635) 0:07:03.566 ********* 2025-08-29 14:36:05.757699 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:05.757706 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:05.757710 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:05.757715 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:05.757719 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:05.757723 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:05.757727 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:05.757732 | orchestrator | 2025-08-29 14:36:05.757736 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-08-29 
14:36:05.757740 | orchestrator | Friday 29 August 2025 14:35:52 +0000 (0:00:00.556) 0:07:04.122 ********* 2025-08-29 14:36:05.757746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:05.757752 | orchestrator | 2025-08-29 14:36:05.757756 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-08-29 14:36:05.757761 | orchestrator | Friday 29 August 2025 14:35:53 +0000 (0:00:00.828) 0:07:04.951 ********* 2025-08-29 14:36:05.757765 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:05.757769 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:05.757773 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:05.757778 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:05.757782 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:05.757786 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:05.757790 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:05.757794 | orchestrator | 2025-08-29 14:36:05.757799 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-08-29 14:36:05.757803 | orchestrator | Friday 29 August 2025 14:35:54 +0000 (0:00:01.076) 0:07:06.027 ********* 2025-08-29 14:36:05.757807 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:05.757816 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:05.757820 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:05.757825 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:05.757829 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:05.757833 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:05.757837 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:05.757843 | orchestrator | 2025-08-29 14:36:05.757849 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-08-29 14:36:05.757855 | orchestrator | Friday 29 August 2025 14:35:55 +0000 (0:00:00.984) 0:07:07.011 ********* 2025-08-29 14:36:05.757862 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:05.757868 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:05.757874 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:05.757879 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:05.757890 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:05.757896 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:05.757902 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:05.757908 | orchestrator | 2025-08-29 14:36:05.757913 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-08-29 14:36:05.757919 | orchestrator | Friday 29 August 2025 14:35:55 +0000 (0:00:00.542) 0:07:07.554 ********* 2025-08-29 14:36:05.757926 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:05.757932 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:05.757938 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:05.757944 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:05.757950 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:05.757957 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:05.757963 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:05.757969 | orchestrator | 2025-08-29 14:36:05.757974 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose 
script] *************** 2025-08-29 14:36:05.757979 | orchestrator | Friday 29 August 2025 14:35:57 +0000 (0:00:01.557) 0:07:09.112 ********* 2025-08-29 14:36:05.757983 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:05.757987 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:05.757991 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:05.757996 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:05.758000 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:05.758004 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:05.758008 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:05.758012 | orchestrator | 2025-08-29 14:36:05.758052 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-08-29 14:36:05.758056 | orchestrator | Friday 29 August 2025 14:35:57 +0000 (0:00:00.549) 0:07:09.661 ********* 2025-08-29 14:36:05.758061 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:05.758065 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:05.758070 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:05.758074 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:05.758080 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:05.758087 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:05.758093 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:05.758099 | orchestrator | 2025-08-29 14:36:05.758112 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-08-29 14:36:39.115253 | orchestrator | Friday 29 August 2025 14:36:05 +0000 (0:00:07.777) 0:07:17.439 ********* 2025-08-29 14:36:39.115383 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.115400 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:39.115413 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:39.115424 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:39.115435 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:39.115446 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:39.115457 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:39.115468 | orchestrator | 2025-08-29 14:36:39.115479 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-08-29 14:36:39.115491 | orchestrator | Friday 29 August 2025 14:36:07 +0000 (0:00:01.329) 0:07:18.769 ********* 2025-08-29 14:36:39.115530 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:39.115541 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:39.115609 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:39.115622 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:39.115633 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:39.115643 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:39.115654 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.115664 | orchestrator | 2025-08-29 14:36:39.115675 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-08-29 14:36:39.115686 | orchestrator | Friday 29 August 2025 14:36:09 +0000 (0:00:02.001) 0:07:20.770 ********* 2025-08-29 14:36:39.115697 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.115707 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:39.115718 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:39.115728 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:39.115739 | 
orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:39.115750 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:39.115762 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:39.115774 | orchestrator | 2025-08-29 14:36:39.115786 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 14:36:39.115797 | orchestrator | Friday 29 August 2025 14:36:10 +0000 (0:00:01.773) 0:07:22.543 ********* 2025-08-29 14:36:39.115809 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.115821 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.115833 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.115845 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.115858 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.115870 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.115882 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.115894 | orchestrator | 2025-08-29 14:36:39.115906 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 14:36:39.115918 | orchestrator | Friday 29 August 2025 14:36:11 +0000 (0:00:01.105) 0:07:23.649 ********* 2025-08-29 14:36:39.115930 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:39.115942 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:39.115953 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:39.115965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:39.115977 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:39.115989 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:39.116000 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:39.116013 | orchestrator | 2025-08-29 14:36:39.116024 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-08-29 14:36:39.116037 | orchestrator | Friday 29 August 2025 14:36:12 +0000 (0:00:00.901) 0:07:24.550 ********* 2025-08-29 14:36:39.116049 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:39.116061 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:39.116073 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:39.116085 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:39.116097 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:39.116108 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:39.116122 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:39.116142 | orchestrator | 2025-08-29 14:36:39.116173 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-08-29 14:36:39.116194 | orchestrator | Friday 29 August 2025 14:36:13 +0000 (0:00:00.568) 0:07:25.119 ********* 2025-08-29 14:36:39.116213 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.116231 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.116251 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.116288 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.116300 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.116310 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.116321 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.116332 | orchestrator | 2025-08-29 14:36:39.116342 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-08-29 14:36:39.116363 | orchestrator | Friday 29 August 2025 14:36:14 +0000 (0:00:00.765) 0:07:25.884 ********* 2025-08-29 14:36:39.116374 | 
orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.116384 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.116395 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.116405 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.116416 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.116426 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.116437 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.116447 | orchestrator | 2025-08-29 14:36:39.116458 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-08-29 14:36:39.116469 | orchestrator | Friday 29 August 2025 14:36:14 +0000 (0:00:00.585) 0:07:26.470 ********* 2025-08-29 14:36:39.116479 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.116490 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.116500 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.116511 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.116521 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.116532 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.116542 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.116612 | orchestrator | 2025-08-29 14:36:39.116625 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-08-29 14:36:39.116635 | orchestrator | Friday 29 August 2025 14:36:15 +0000 (0:00:00.606) 0:07:27.076 ********* 2025-08-29 14:36:39.116646 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.116657 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.116667 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.116678 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.116688 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.116698 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.116708 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.116719 | orchestrator | 2025-08-29 14:36:39.116730 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-08-29 14:36:39.116759 | orchestrator | Friday 29 August 2025 14:36:21 +0000 (0:00:05.658) 0:07:32.734 ********* 2025-08-29 14:36:39.116770 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:39.116781 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:39.116791 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:39.116802 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:39.116813 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:39.116823 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:39.116834 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:39.116844 | orchestrator | 2025-08-29 14:36:39.116855 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-08-29 14:36:39.116865 | orchestrator | Friday 29 August 2025 14:36:21 +0000 (0:00:00.539) 0:07:33.273 ********* 2025-08-29 14:36:39.116878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:39.116892 | orchestrator | 2025-08-29 14:36:39.116903 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-08-29 14:36:39.116914 | orchestrator | Friday 29 August 2025 14:36:22 +0000 (0:00:01.163) 0:07:34.437 ********* 
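Note: the "Gather variables for each operating system" step seen above is the common Ansible idiom of loading a per-distribution vars file via a first_found lookup; the chrony_conf_file default is then taken from whichever file matched. A generic sketch of that idiom (file names are illustrative, not the chrony role's actual layout):

    - name: Gather variables for each operating system
      ansible.builtin.include_vars: "{{ lookup('ansible.builtin.first_found', params) }}"
      vars:
        params:
          files:
            - "{{ ansible_facts['distribution'] }}.yml"   # e.g. Ubuntu.yml
            - "{{ ansible_facts['os_family'] }}.yml"      # e.g. Debian.yml
            - "default.yml"
          paths:
            - "vars"
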
2025-08-29 14:36:39.116924 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.116935 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.116945 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.116956 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.116966 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.116977 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.116987 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.116998 | orchestrator | 2025-08-29 14:36:39.117009 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-08-29 14:36:39.117019 | orchestrator | Friday 29 August 2025 14:36:24 +0000 (0:00:01.805) 0:07:36.243 ********* 2025-08-29 14:36:39.117038 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.117048 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.117059 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.117069 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.117080 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.117090 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.117100 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.117111 | orchestrator | 2025-08-29 14:36:39.117125 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-08-29 14:36:39.117143 | orchestrator | Friday 29 August 2025 14:36:25 +0000 (0:00:01.218) 0:07:37.461 ********* 2025-08-29 14:36:39.117161 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:39.117177 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:39.117194 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:39.117211 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:39.117231 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:39.117248 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:39.117267 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:39.117280 | orchestrator | 2025-08-29 14:36:39.117290 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-08-29 14:36:39.117301 | orchestrator | Friday 29 August 2025 14:36:26 +0000 (0:00:01.118) 0:07:38.579 ********* 2025-08-29 14:36:39.117312 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117325 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117336 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117347 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117358 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117368 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117379 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:36:39.117389 | 
orchestrator | 2025-08-29 14:36:39.117400 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-08-29 14:36:39.117411 | orchestrator | Friday 29 August 2025 14:36:28 +0000 (0:00:01.815) 0:07:40.395 ********* 2025-08-29 14:36:39.117422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:39.117433 | orchestrator | 2025-08-29 14:36:39.117444 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-08-29 14:36:39.117455 | orchestrator | Friday 29 August 2025 14:36:29 +0000 (0:00:00.951) 0:07:41.347 ********* 2025-08-29 14:36:39.117465 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:39.117476 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:39.117487 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:39.117497 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:39.117508 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:39.117518 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:39.117528 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:39.117539 | orchestrator | 2025-08-29 14:36:39.117572 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-08-29 14:36:39.117592 | orchestrator | Friday 29 August 2025 14:36:39 +0000 (0:00:09.455) 0:07:50.802 ********* 2025-08-29 14:36:56.625734 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:56.625849 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:56.625864 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:56.625876 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:56.625887 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:56.625897 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:56.625908 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:56.625919 | orchestrator | 2025-08-29 14:36:56.625931 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-08-29 14:36:56.625944 | orchestrator | Friday 29 August 2025 14:36:40 +0000 (0:00:01.857) 0:07:52.660 ********* 2025-08-29 14:36:56.625955 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:56.625966 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:56.626094 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:56.626113 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:56.626124 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:56.626134 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:56.626144 | orchestrator | 2025-08-29 14:36:56.626156 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-08-29 14:36:56.626166 | orchestrator | Friday 29 August 2025 14:36:42 +0000 (0:00:01.311) 0:07:53.971 ********* 2025-08-29 14:36:56.626177 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:56.626188 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:56.626201 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:56.626214 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:56.626226 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:56.626255 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:56.626267 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:56.626279 | orchestrator | 
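Note: both the "Copy docker fact files" tasks earlier in this play and the osism.commons.state tasks later rely on Ansible local facts: files dropped under /etc/ansible/facts.d are picked up on the next fact gathering and exposed under ansible_local. A minimal sketch of that mechanism, with a hypothetical fact file and content:

    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Write a static local fact    # later readable as ansible_local['example']['state']
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/example.fact   # hypothetical file name
        content: '{"state": "bootstrapped"}'
        mode: "0644"
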
2025-08-29 14:36:56.626292 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-08-29 14:36:56.626304 | orchestrator | 2025-08-29 14:36:56.626317 | orchestrator | TASK [Include hardening role] ************************************************** 2025-08-29 14:36:56.626329 | orchestrator | Friday 29 August 2025 14:36:43 +0000 (0:00:01.680) 0:07:55.651 ********* 2025-08-29 14:36:56.626341 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:56.626353 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:56.626365 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:56.626376 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:56.626388 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:56.626400 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:56.626413 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:56.626425 | orchestrator | 2025-08-29 14:36:56.626437 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-08-29 14:36:56.626449 | orchestrator | 2025-08-29 14:36:56.626461 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-08-29 14:36:56.626474 | orchestrator | Friday 29 August 2025 14:36:44 +0000 (0:00:00.607) 0:07:56.259 ********* 2025-08-29 14:36:56.626486 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:56.626498 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:56.626508 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:56.626519 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:56.626530 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:56.626560 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:56.626571 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:56.626582 | orchestrator | 2025-08-29 14:36:56.626593 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-08-29 14:36:56.626604 | orchestrator | Friday 29 August 2025 14:36:45 +0000 (0:00:01.446) 0:07:57.706 ********* 2025-08-29 14:36:56.626615 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:56.626625 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:56.626636 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:56.626646 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:56.626657 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:56.626668 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:56.626702 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:56.626713 | orchestrator | 2025-08-29 14:36:56.626724 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-08-29 14:36:56.626735 | orchestrator | Friday 29 August 2025 14:36:47 +0000 (0:00:01.500) 0:07:59.206 ********* 2025-08-29 14:36:56.626745 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:56.626756 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:56.626771 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:56.626783 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:56.626793 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:56.626804 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:56.626814 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:56.626825 | orchestrator | 2025-08-29 14:36:56.626836 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
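Note: the journald tasks use the same copy-and-notify approach; journald's configuration itself is INI-style. A sketch of such a task, assuming a drop-in file path and illustrative option values (the role's real template and limits are not shown in this log):

    - name: Copy configuration file
      ansible.builtin.copy:
        dest: /etc/systemd/journald.conf.d/osism.conf   # hypothetical drop-in path
        content: |
          # illustrative values only, not the role's actual template
          [Journal]
          SystemMaxUse=1G
          ForwardToSyslog=yes
        mode: "0644"
      notify: Restart journald service   # handler restarts systemd-journald, as seen below
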
2025-08-29 14:36:56.626846 | orchestrator | Friday 29 August 2025 14:36:48 +0000 (0:00:01.171) 0:08:00.378 ********* 2025-08-29 14:36:56.626857 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:56.626867 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:56.626878 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:56.626888 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:56.626899 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:56.626909 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:56.626920 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:56.626930 | orchestrator | 2025-08-29 14:36:56.626941 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-08-29 14:36:56.626952 | orchestrator | 2025-08-29 14:36:56.626962 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-08-29 14:36:56.626973 | orchestrator | Friday 29 August 2025 14:36:49 +0000 (0:00:01.227) 0:08:01.605 ********* 2025-08-29 14:36:56.626984 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:56.626996 | orchestrator | 2025-08-29 14:36:56.627007 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 14:36:56.627017 | orchestrator | Friday 29 August 2025 14:36:50 +0000 (0:00:01.084) 0:08:02.689 ********* 2025-08-29 14:36:56.627028 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:56.627038 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:56.627049 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:56.627060 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:56.627070 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:56.627081 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:56.627091 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:56.627101 | orchestrator | 2025-08-29 14:36:56.627131 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 14:36:56.627142 | orchestrator | Friday 29 August 2025 14:36:51 +0000 (0:00:00.867) 0:08:03.557 ********* 2025-08-29 14:36:56.627153 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:56.627163 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:56.627174 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:56.627184 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:56.627195 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:56.627205 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:56.627216 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:56.627226 | orchestrator | 2025-08-29 14:36:56.627237 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-08-29 14:36:56.627248 | orchestrator | Friday 29 August 2025 14:36:53 +0000 (0:00:01.198) 0:08:04.756 ********* 2025-08-29 14:36:56.627258 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:56.627269 | orchestrator | 2025-08-29 14:36:56.627281 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 14:36:56.627291 | orchestrator | Friday 29 August 2025 14:36:54 +0000 (0:00:01.561) 0:08:06.317 ********* 2025-08-29 
14:36:56.627309 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:56.627320 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:56.627331 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:56.627341 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:56.627352 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:56.627362 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:56.627372 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:56.627383 | orchestrator | 2025-08-29 14:36:56.627394 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 14:36:56.627405 | orchestrator | Friday 29 August 2025 14:36:55 +0000 (0:00:00.885) 0:08:07.202 ********* 2025-08-29 14:36:56.627415 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:56.627426 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:56.627436 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:56.627447 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:56.627457 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:56.627468 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:56.627478 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:56.627488 | orchestrator | 2025-08-29 14:36:56.627499 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:36:56.627511 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-08-29 14:36:56.627522 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-08-29 14:36:56.627533 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:56.627566 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:56.627577 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:56.627588 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:56.627599 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:56.627610 | orchestrator | 2025-08-29 14:36:56.627620 | orchestrator | 2025-08-29 14:36:56.627631 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:36:56.627642 | orchestrator | Friday 29 August 2025 14:36:56 +0000 (0:00:01.104) 0:08:08.307 ********* 2025-08-29 14:36:56.627653 | orchestrator | =============================================================================== 2025-08-29 14:36:56.627664 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.39s 2025-08-29 14:36:56.627674 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.94s 2025-08-29 14:36:56.627685 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.50s 2025-08-29 14:36:56.627695 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.33s 2025-08-29 14:36:56.627706 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.28s 2025-08-29 14:36:56.627717 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.77s 
2025-08-29 14:36:56.627728 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.07s 2025-08-29 14:36:56.627738 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.09s 2025-08-29 14:36:56.627749 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.54s 2025-08-29 14:36:56.627775 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.46s 2025-08-29 14:36:56.627793 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.52s 2025-08-29 14:36:56.627819 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.47s 2025-08-29 14:36:56.627842 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.25s 2025-08-29 14:36:56.627857 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.08s 2025-08-29 14:36:56.627884 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.97s 2025-08-29 14:36:57.191776 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.78s 2025-08-29 14:36:57.191883 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.59s 2025-08-29 14:36:57.191897 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.19s 2025-08-29 14:36:57.191909 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.00s 2025-08-29 14:36:57.191920 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.74s 2025-08-29 14:36:57.528783 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 14:36:57.529632 | orchestrator | + osism apply network 2025-08-29 14:37:10.407117 | orchestrator | 2025-08-29 14:37:10 | INFO  | Task e70d5fb7-740f-4e51-86ff-e728bebc85cb (network) was prepared for execution. 2025-08-29 14:37:10.407227 | orchestrator | 2025-08-29 14:37:10 | INFO  | It takes a moment until task e70d5fb7-740f-4e51-86ff-e728bebc85cb (network) has been started and output is visible here. 
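Note: the network role started by "osism apply network" renders a netplan file; the cleanup task later in this play keeps /etc/netplan/01-osism.yaml and removes the cloud-init default. A minimal netplan sketch of the kind of file such a role produces — the interface name and gateway are assumptions, the address reuses one of the management IPs visible in this log:

    # /etc/netplan/01-osism.yaml (illustrative sketch, not the testbed's actual file)
    network:
      version: 2
      ethernets:
        ens3:                        # hypothetical interface name
          dhcp4: false
          addresses:
            - 192.168.16.10/20       # management address of testbed-node-0, as an example
          nameservers:
            addresses: [8.8.8.8]
          routes:
            - to: default
              via: 192.168.16.1      # hypothetical gateway
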
2025-08-29 14:37:40.745992 | orchestrator | 2025-08-29 14:37:40.746127 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-08-29 14:37:40.746138 | orchestrator | 2025-08-29 14:37:40.746145 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-08-29 14:37:40.746152 | orchestrator | Friday 29 August 2025 14:37:15 +0000 (0:00:00.311) 0:00:00.311 ********* 2025-08-29 14:37:40.746158 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.746166 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.746172 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.746178 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.746184 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.746190 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.746196 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.746202 | orchestrator | 2025-08-29 14:37:40.746208 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-08-29 14:37:40.746214 | orchestrator | Friday 29 August 2025 14:37:16 +0000 (0:00:00.802) 0:00:01.113 ********* 2025-08-29 14:37:40.746222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:37:40.746230 | orchestrator | 2025-08-29 14:37:40.746236 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-08-29 14:37:40.746242 | orchestrator | Friday 29 August 2025 14:37:17 +0000 (0:00:01.250) 0:00:02.364 ********* 2025-08-29 14:37:40.746248 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.746254 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.746259 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.746265 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.746271 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.746277 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.746282 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.746288 | orchestrator | 2025-08-29 14:37:40.746294 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-08-29 14:37:40.746300 | orchestrator | Friday 29 August 2025 14:37:19 +0000 (0:00:02.043) 0:00:04.407 ********* 2025-08-29 14:37:40.746306 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.746312 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.746318 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.746345 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.746351 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.746357 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.746387 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.746393 | orchestrator | 2025-08-29 14:37:40.746399 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-08-29 14:37:40.746405 | orchestrator | Friday 29 August 2025 14:37:21 +0000 (0:00:01.767) 0:00:06.175 ********* 2025-08-29 14:37:40.746423 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-08-29 14:37:40.746430 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-08-29 14:37:40.746436 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-08-29 14:37:40.746442 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-08-29 14:37:40.746447 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-08-29 14:37:40.746453 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-08-29 14:37:40.746459 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-08-29 14:37:40.746464 | orchestrator | 2025-08-29 14:37:40.746469 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-08-29 14:37:40.746476 | orchestrator | Friday 29 August 2025 14:37:22 +0000 (0:00:00.967) 0:00:07.143 ********* 2025-08-29 14:37:40.746482 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:37:40.746489 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 14:37:40.746494 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 14:37:40.746500 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 14:37:40.746505 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 14:37:40.746510 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 14:37:40.746580 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 14:37:40.746589 | orchestrator | 2025-08-29 14:37:40.746599 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-08-29 14:37:40.746608 | orchestrator | Friday 29 August 2025 14:37:26 +0000 (0:00:03.602) 0:00:10.745 ********* 2025-08-29 14:37:40.746619 | orchestrator | changed: [testbed-manager] 2025-08-29 14:37:40.746628 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:37:40.746637 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:37:40.746643 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:37:40.746650 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:37:40.746658 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:37:40.746667 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:37:40.746676 | orchestrator | 2025-08-29 14:37:40.746684 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-08-29 14:37:40.746695 | orchestrator | Friday 29 August 2025 14:37:27 +0000 (0:00:01.474) 0:00:12.220 ********* 2025-08-29 14:37:40.746703 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:37:40.746712 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 14:37:40.746723 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 14:37:40.746732 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 14:37:40.746739 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 14:37:40.746746 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 14:37:40.746754 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 14:37:40.746759 | orchestrator | 2025-08-29 14:37:40.746765 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-08-29 14:37:40.746770 | orchestrator | Friday 29 August 2025 14:37:29 +0000 (0:00:01.992) 0:00:14.212 ********* 2025-08-29 14:37:40.746776 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.746782 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.746787 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.746792 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.746798 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.746803 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.746808 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.746814 | orchestrator | 2025-08-29 
14:37:40.746819 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-08-29 14:37:40.746847 | orchestrator | Friday 29 August 2025 14:37:30 +0000 (0:00:01.131) 0:00:15.344 ********* 2025-08-29 14:37:40.746853 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:37:40.746858 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:37:40.746863 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:37:40.746869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:37:40.746874 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:37:40.746879 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:37:40.746884 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:37:40.746890 | orchestrator | 2025-08-29 14:37:40.746895 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-08-29 14:37:40.746901 | orchestrator | Friday 29 August 2025 14:37:31 +0000 (0:00:00.677) 0:00:16.021 ********* 2025-08-29 14:37:40.746906 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.746912 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.746917 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.746923 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.746928 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.746934 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.746939 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.746945 | orchestrator | 2025-08-29 14:37:40.746951 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-08-29 14:37:40.746956 | orchestrator | Friday 29 August 2025 14:37:33 +0000 (0:00:02.261) 0:00:18.282 ********* 2025-08-29 14:37:40.746962 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:37:40.746968 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:37:40.746973 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:37:40.746978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:37:40.746984 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:37:40.746989 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:37:40.746995 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-08-29 14:37:40.747002 | orchestrator | 2025-08-29 14:37:40.747008 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-08-29 14:37:40.747014 | orchestrator | Friday 29 August 2025 14:37:34 +0000 (0:00:01.095) 0:00:19.378 ********* 2025-08-29 14:37:40.747019 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.747024 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:37:40.747030 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:37:40.747035 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:37:40.747041 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:37:40.747047 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:37:40.747053 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:37:40.747059 | orchestrator | 2025-08-29 14:37:40.747064 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-08-29 14:37:40.747069 | orchestrator | Friday 29 August 2025 14:37:36 +0000 (0:00:01.666) 0:00:21.044 ********* 2025-08-29 14:37:40.747080 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:37:40.747088 | orchestrator | 2025-08-29 14:37:40.747094 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:37:40.747099 | orchestrator | Friday 29 August 2025 14:37:37 +0000 (0:00:01.320) 0:00:22.365 ********* 2025-08-29 14:37:40.747105 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.747110 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.747116 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.747121 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.747127 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.747132 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.747138 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.747143 | orchestrator | 2025-08-29 14:37:40.747154 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-08-29 14:37:40.747160 | orchestrator | Friday 29 August 2025 14:37:38 +0000 (0:00:00.996) 0:00:23.361 ********* 2025-08-29 14:37:40.747166 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:40.747172 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:40.747177 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:40.747183 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:40.747189 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:40.747195 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:40.747200 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:40.747206 | orchestrator | 2025-08-29 14:37:40.747211 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 14:37:40.747217 | orchestrator | Friday 29 August 2025 14:37:39 +0000 (0:00:00.837) 0:00:24.198 ********* 2025-08-29 14:37:40.747223 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747228 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747250 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747255 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747261 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:37:40.747267 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:37:40.747273 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747279 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747284 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:37:40.747290 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:37:40.747296 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:37:40.747301 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:37:40.747307 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:37:40.747313 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 
14:37:40.747319 | orchestrator | 2025-08-29 14:37:40.747330 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-08-29 14:37:57.904248 | orchestrator | Friday 29 August 2025 14:37:40 +0000 (0:00:01.240) 0:00:25.439 ********* 2025-08-29 14:37:57.904354 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:37:57.904366 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:37:57.904374 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:37:57.904382 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:37:57.904390 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:37:57.904397 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:37:57.904404 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:37:57.904411 | orchestrator | 2025-08-29 14:37:57.904420 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-08-29 14:37:57.904427 | orchestrator | Friday 29 August 2025 14:37:41 +0000 (0:00:00.738) 0:00:26.178 ********* 2025-08-29 14:37:57.904436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-2, testbed-node-0, testbed-node-3, testbed-node-1, testbed-node-4, testbed-node-5 2025-08-29 14:37:57.904448 | orchestrator | 2025-08-29 14:37:57.904454 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-08-29 14:37:57.904459 | orchestrator | Friday 29 August 2025 14:37:46 +0000 (0:00:05.287) 0:00:31.465 ********* 2025-08-29 14:37:57.904467 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904570 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904656 | orchestrator | 2025-08-29 14:37:57.904663 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-08-29 14:37:57.904677 | orchestrator | Friday 29 August 2025 14:37:52 +0000 (0:00:05.494) 0:00:36.960 ********* 2025-08-29 14:37:57.904685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904691 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904703 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904722 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:57.904749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:57.904782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:38:04.641567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:38:04.641711 | orchestrator | 2025-08-29 14:38:04.641729 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 14:38:04.641743 | orchestrator | Friday 29 August 2025 14:37:57 +0000 (0:00:05.631) 
0:00:42.592 ********* 2025-08-29 14:38:04.641756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:38:04.641768 | orchestrator | 2025-08-29 14:38:04.641779 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:38:04.641790 | orchestrator | Friday 29 August 2025 14:37:59 +0000 (0:00:01.293) 0:00:43.885 ********* 2025-08-29 14:38:04.641802 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:04.641813 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:38:04.641824 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:38:04.641835 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:38:04.641846 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:38:04.641856 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:38:04.641867 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:38:04.641878 | orchestrator | 2025-08-29 14:38:04.641889 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 14:38:04.641900 | orchestrator | Friday 29 August 2025 14:38:00 +0000 (0:00:01.248) 0:00:45.134 ********* 2025-08-29 14:38:04.641911 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.641922 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.641933 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.641944 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.641955 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:38:04.641980 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.641991 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.642002 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.642070 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.642085 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:38:04.642098 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.642111 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.642124 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.642136 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.642148 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:38:04.642161 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.642174 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.642186 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.642199 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.642211 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:38:04.642224 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.642236 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.642249 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.642269 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.642281 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:38:04.642295 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.642307 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.642319 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.642331 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.642344 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:38:04.642357 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:38:04.642370 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:38:04.642382 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:38:04.642392 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:38:04.642403 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:38:04.642413 | orchestrator | 2025-08-29 14:38:04.642424 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 14:38:04.642453 | orchestrator | Friday 29 August 2025 14:38:02 +0000 (0:00:02.356) 0:00:47.490 ********* 2025-08-29 14:38:04.642465 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:38:04.642476 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:38:04.642486 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:38:04.642497 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:38:04.642567 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:38:04.642578 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:38:04.642589 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:38:04.642600 | orchestrator | 2025-08-29 14:38:04.642610 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 14:38:04.642621 | orchestrator | Friday 29 August 2025 14:38:03 +0000 (0:00:00.661) 0:00:48.152 ********* 2025-08-29 14:38:04.642631 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:38:04.642642 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:38:04.642653 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:38:04.642663 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:38:04.642674 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:38:04.642684 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:38:04.642694 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:38:04.642705 | orchestrator | 2025-08-29 14:38:04.642716 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:38:04.642729 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:38:04.642741 | 
orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:38:04.642752 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:38:04.642763 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:38:04.642774 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:38:04.642791 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:38:04.642802 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:38:04.642820 | orchestrator | 2025-08-29 14:38:04.642831 | orchestrator | 2025-08-29 14:38:04.642842 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:38:04.642853 | orchestrator | Friday 29 August 2025 14:38:04 +0000 (0:00:00.768) 0:00:48.920 ********* 2025-08-29 14:38:04.642864 | orchestrator | =============================================================================== 2025-08-29 14:38:04.642874 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.63s 2025-08-29 14:38:04.642885 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.49s 2025-08-29 14:38:04.642896 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.29s 2025-08-29 14:38:04.642906 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.60s 2025-08-29 14:38:04.642917 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.36s 2025-08-29 14:38:04.642928 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2025-08-29 14:38:04.642938 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.04s 2025-08-29 14:38:04.642949 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.99s 2025-08-29 14:38:04.642960 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2025-08-29 14:38:04.642971 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s 2025-08-29 14:38:04.642981 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.47s 2025-08-29 14:38:04.642992 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2025-08-29 14:38:04.643003 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s 2025-08-29 14:38:04.643013 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s 2025-08-29 14:38:04.643024 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s 2025-08-29 14:38:04.643035 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s 2025-08-29 14:38:04.643045 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-08-29 14:38:04.643056 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.10s 2025-08-29 14:38:04.643066 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 1.00s 2025-08-29 14:38:04.643077 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-08-29 14:38:04.958834 | orchestrator | + osism apply wireguard 2025-08-29 14:38:16.939051 | orchestrator | 2025-08-29 14:38:16 | INFO  | Task 75834c13-b62a-45a6-b59f-3bdcd282a923 (wireguard) was prepared for execution. 2025-08-29 14:38:16.939204 | orchestrator | 2025-08-29 14:38:16 | INFO  | It takes a moment until task 75834c13-b62a-45a6-b59f-3bdcd282a923 (wireguard) has been started and output is visible here. 2025-08-29 14:38:37.453205 | orchestrator | 2025-08-29 14:38:37.453304 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 14:38:37.453315 | orchestrator | 2025-08-29 14:38:37.453322 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 14:38:37.453328 | orchestrator | Friday 29 August 2025 14:38:21 +0000 (0:00:00.237) 0:00:00.237 ********* 2025-08-29 14:38:37.453334 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:37.453341 | orchestrator | 2025-08-29 14:38:37.453348 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 14:38:37.453354 | orchestrator | Friday 29 August 2025 14:38:22 +0000 (0:00:01.752) 0:00:01.989 ********* 2025-08-29 14:38:37.453360 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:37.453366 | orchestrator | 2025-08-29 14:38:37.453373 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 14:38:37.453379 | orchestrator | Friday 29 August 2025 14:38:29 +0000 (0:00:06.860) 0:00:08.850 ********* 2025-08-29 14:38:37.453407 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:37.453413 | orchestrator | 2025-08-29 14:38:37.453419 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 14:38:37.453425 | orchestrator | Friday 29 August 2025 14:38:30 +0000 (0:00:00.563) 0:00:09.414 ********* 2025-08-29 14:38:37.453431 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:37.453437 | orchestrator | 2025-08-29 14:38:37.453443 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 14:38:37.453449 | orchestrator | Friday 29 August 2025 14:38:30 +0000 (0:00:00.459) 0:00:09.873 ********* 2025-08-29 14:38:37.453455 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:37.453461 | orchestrator | 2025-08-29 14:38:37.453467 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 14:38:37.453472 | orchestrator | Friday 29 August 2025 14:38:31 +0000 (0:00:00.501) 0:00:10.375 ********* 2025-08-29 14:38:37.453530 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:37.453537 | orchestrator | 2025-08-29 14:38:37.453543 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 14:38:37.453549 | orchestrator | Friday 29 August 2025 14:38:31 +0000 (0:00:00.575) 0:00:10.951 ********* 2025-08-29 14:38:37.453555 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:37.453561 | orchestrator | 2025-08-29 14:38:37.453567 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 14:38:37.453584 | orchestrator | Friday 29 August 2025 14:38:32 +0000 (0:00:00.427) 0:00:11.378 ********* 2025-08-29 14:38:37.453590 | orchestrator | 
changed: [testbed-manager] 2025-08-29 14:38:37.453596 | orchestrator | 2025-08-29 14:38:37.453602 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 14:38:37.453608 | orchestrator | Friday 29 August 2025 14:38:33 +0000 (0:00:01.258) 0:00:12.637 ********* 2025-08-29 14:38:37.453614 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:38:37.453619 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:37.453625 | orchestrator | 2025-08-29 14:38:37.453631 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 14:38:37.453637 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.959) 0:00:13.596 ********* 2025-08-29 14:38:37.453642 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:37.453648 | orchestrator | 2025-08-29 14:38:37.453654 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 14:38:37.453659 | orchestrator | Friday 29 August 2025 14:38:36 +0000 (0:00:01.765) 0:00:15.361 ********* 2025-08-29 14:38:37.453665 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:37.453670 | orchestrator | 2025-08-29 14:38:37.453677 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:38:37.453683 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:38:37.453689 | orchestrator | 2025-08-29 14:38:37.453695 | orchestrator | 2025-08-29 14:38:37.453701 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:38:37.453707 | orchestrator | Friday 29 August 2025 14:38:37 +0000 (0:00:00.975) 0:00:16.337 ********* 2025-08-29 14:38:37.453713 | orchestrator | =============================================================================== 2025-08-29 14:38:37.453718 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.86s 2025-08-29 14:38:37.453724 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s 2025-08-29 14:38:37.453730 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.75s 2025-08-29 14:38:37.453736 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2025-08-29 14:38:37.453742 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2025-08-29 14:38:37.453748 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2025-08-29 14:38:37.453753 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.58s 2025-08-29 14:38:37.453768 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-08-29 14:38:37.453774 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s 2025-08-29 14:38:37.453781 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2025-08-29 14:38:37.453787 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-08-29 14:38:37.733418 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-08-29 14:38:37.775136 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-08-29 14:38:37.775230 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-08-29 14:38:37.849742 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 186 0 --:--:-- --:--:-- --:--:-- 189 2025-08-29 14:38:37.863890 | orchestrator | + osism apply --environment custom workarounds 2025-08-29 14:38:39.756045 | orchestrator | 2025-08-29 14:38:39 | INFO  | Trying to run play workarounds in environment custom 2025-08-29 14:38:49.973062 | orchestrator | 2025-08-29 14:38:49 | INFO  | Task 3ee89f0d-b8b9-471d-88f9-f8698f238d94 (workarounds) was prepared for execution. 2025-08-29 14:38:49.973188 | orchestrator | 2025-08-29 14:38:49 | INFO  | It takes a moment until task 3ee89f0d-b8b9-471d-88f9-f8698f238d94 (workarounds) has been started and output is visible here. 2025-08-29 14:39:15.520409 | orchestrator | 2025-08-29 14:39:15.520551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:39:15.520571 | orchestrator | 2025-08-29 14:39:15.520583 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-08-29 14:39:15.520596 | orchestrator | Friday 29 August 2025 14:38:54 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-08-29 14:39:15.520608 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520619 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520630 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520642 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520653 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520664 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520675 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-08-29 14:39:15.520686 | orchestrator | 2025-08-29 14:39:15.520698 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-08-29 14:39:15.520709 | orchestrator | 2025-08-29 14:39:15.520720 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 14:39:15.520731 | orchestrator | Friday 29 August 2025 14:38:54 +0000 (0:00:00.796) 0:00:00.956 ********* 2025-08-29 14:39:15.520743 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:15.520755 | orchestrator | 2025-08-29 14:39:15.520767 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-08-29 14:39:15.520778 | orchestrator | 2025-08-29 14:39:15.520799 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 14:39:15.520810 | orchestrator | Friday 29 August 2025 14:38:57 +0000 (0:00:02.487) 0:00:03.444 ********* 2025-08-29 14:39:15.520822 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:15.520833 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:15.520845 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:15.520856 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:15.520867 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:15.520879 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:15.520890 | orchestrator | 2025-08-29 14:39:15.520901 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-08-29 14:39:15.520912 | orchestrator | 
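The play announced above copies the testbed CA certificate from the configuration repository onto every non-manager node and then refreshes the system trust store (update-ca-certificates on Debian-family hosts; the RedHat variant is skipped). A minimal illustrative sketch of such a play follows; the destination directory and module choices are assumptions for illustration, not a copy of the testbed playbook:

- name: Add custom CA certificates to non-manager nodes
  hosts: testbed-nodes
  become: true
  tasks:
    - name: Copy custom CA certificates
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /usr/local/share/ca-certificates/   # assumed Debian trust-store location
        mode: "0644"
      loop:
        - /opt/configuration/environments/kolla/certificates/ca/testbed.crt

    - name: Run update-ca-certificates
      ansible.builtin.command: update-ca-certificates
      when: ansible_os_family == 'Debian'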
2025-08-29 14:39:15.520941 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-08-29 14:39:15.520954 | orchestrator | Friday 29 August 2025 14:38:59 +0000 (0:00:01.937) 0:00:05.381 ********* 2025-08-29 14:39:15.520967 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:39:15.520981 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:39:15.520994 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:39:15.521007 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:39:15.521019 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:39:15.521032 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:39:15.521044 | orchestrator | 2025-08-29 14:39:15.521058 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-08-29 14:39:15.521070 | orchestrator | Friday 29 August 2025 14:39:00 +0000 (0:00:01.549) 0:00:06.931 ********* 2025-08-29 14:39:15.521083 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:15.521095 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:15.521108 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:15.521121 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:15.521133 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:15.521146 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:15.521159 | orchestrator | 2025-08-29 14:39:15.521172 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-08-29 14:39:15.521185 | orchestrator | Friday 29 August 2025 14:39:04 +0000 (0:00:03.831) 0:00:10.763 ********* 2025-08-29 14:39:15.521198 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:15.521210 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:15.521223 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:15.521236 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:15.521247 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:15.521258 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:15.521269 | orchestrator | 2025-08-29 14:39:15.521281 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-08-29 14:39:15.521293 | orchestrator | 2025-08-29 14:39:15.521304 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-08-29 14:39:15.521315 | orchestrator | Friday 29 August 2025 14:39:05 +0000 (0:00:00.695) 0:00:11.458 ********* 2025-08-29 14:39:15.521326 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:15.521337 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:15.521348 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:15.521359 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:15.521370 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:15.521381 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:15.521392 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:15.521403 | orchestrator | 2025-08-29 14:39:15.521414 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-08-29 14:39:15.521426 | orchestrator | Friday 29 August 2025 14:39:07 +0000 (0:00:01.609) 0:00:13.068 ********* 2025-08-29 14:39:15.521437 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:15.521475 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:15.521485 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:15.521507 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:15.521518 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:15.521528 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:15.521556 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:15.521567 | orchestrator | 2025-08-29 14:39:15.521578 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-08-29 14:39:15.521589 | orchestrator | Friday 29 August 2025 14:39:08 +0000 (0:00:01.629) 0:00:14.698 ********* 2025-08-29 14:39:15.521607 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:15.521618 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:15.521628 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:15.521639 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:15.521650 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:15.521660 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:15.521671 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:15.521681 | orchestrator | 2025-08-29 14:39:15.521692 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-08-29 14:39:15.521703 | orchestrator | Friday 29 August 2025 14:39:10 +0000 (0:00:01.540) 0:00:16.238 ********* 2025-08-29 14:39:15.521714 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:15.521724 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:15.521735 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:15.521746 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:15.521756 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:15.521767 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:15.521777 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:15.521788 | orchestrator | 2025-08-29 14:39:15.521798 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-08-29 14:39:15.521809 | orchestrator | Friday 29 August 2025 14:39:11 +0000 (0:00:01.750) 0:00:17.989 ********* 2025-08-29 14:39:15.521820 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:39:15.521830 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:15.521841 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:15.521851 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:15.521866 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:15.521878 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:15.521888 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:15.521899 | orchestrator | 2025-08-29 14:39:15.521909 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-08-29 14:39:15.521920 | orchestrator | 2025-08-29 14:39:15.521931 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-08-29 14:39:15.521941 | orchestrator | Friday 29 August 2025 14:39:12 +0000 (0:00:00.667) 0:00:18.656 ********* 2025-08-29 14:39:15.521952 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:15.521963 
| orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:15.521974 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:15.521984 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:15.521995 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:15.522005 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:15.522075 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:15.522089 | orchestrator | 2025-08-29 14:39:15.522100 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:39:15.522113 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:39:15.522125 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:15.522136 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:15.522147 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:15.522157 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:15.522168 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:15.522179 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:15.522197 | orchestrator | 2025-08-29 14:39:15.522207 | orchestrator | 2025-08-29 14:39:15.522218 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:39:15.522229 | orchestrator | Friday 29 August 2025 14:39:15 +0000 (0:00:02.839) 0:00:21.496 ********* 2025-08-29 14:39:15.522239 | orchestrator | =============================================================================== 2025-08-29 14:39:15.522250 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s 2025-08-29 14:39:15.522261 | orchestrator | Install python3-docker -------------------------------------------------- 2.84s 2025-08-29 14:39:15.522271 | orchestrator | Apply netplan configuration --------------------------------------------- 2.49s 2025-08-29 14:39:15.522282 | orchestrator | Apply netplan configuration --------------------------------------------- 1.94s 2025-08-29 14:39:15.522292 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.75s 2025-08-29 14:39:15.522303 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-08-29 14:39:15.522313 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2025-08-29 14:39:15.522324 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s 2025-08-29 14:39:15.522335 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s 2025-08-29 14:39:15.522345 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s 2025-08-29 14:39:15.522356 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s 2025-08-29 14:39:15.522374 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.67s 2025-08-29 14:39:16.199945 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-08-29 14:39:28.126299 | orchestrator | 
2025-08-29 14:39:28 | INFO  | Task d4d00ce6-b64e-4224-ab11-3981d52c6aa6 (reboot) was prepared for execution. 2025-08-29 14:39:28.126409 | orchestrator | 2025-08-29 14:39:28 | INFO  | It takes a moment until task d4d00ce6-b64e-4224-ab11-3981d52c6aa6 (reboot) has been started and output is visible here. 2025-08-29 14:39:38.240690 | orchestrator | 2025-08-29 14:39:38.240803 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 14:39:38.240818 | orchestrator | 2025-08-29 14:39:38.240828 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 14:39:38.240837 | orchestrator | Friday 29 August 2025 14:39:32 +0000 (0:00:00.216) 0:00:00.216 ********* 2025-08-29 14:39:38.240846 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:38.240856 | orchestrator | 2025-08-29 14:39:38.240864 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 14:39:38.240873 | orchestrator | Friday 29 August 2025 14:39:32 +0000 (0:00:00.092) 0:00:00.309 ********* 2025-08-29 14:39:38.240882 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:38.240891 | orchestrator | 2025-08-29 14:39:38.240899 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 14:39:38.240908 | orchestrator | Friday 29 August 2025 14:39:33 +0000 (0:00:00.915) 0:00:01.224 ********* 2025-08-29 14:39:38.240917 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:38.240925 | orchestrator | 2025-08-29 14:39:38.240935 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 14:39:38.240943 | orchestrator | 2025-08-29 14:39:38.240952 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 14:39:38.240961 | orchestrator | Friday 29 August 2025 14:39:33 +0000 (0:00:00.111) 0:00:01.336 ********* 2025-08-29 14:39:38.240969 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:38.240978 | orchestrator | 2025-08-29 14:39:38.240987 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 14:39:38.240996 | orchestrator | Friday 29 August 2025 14:39:33 +0000 (0:00:00.105) 0:00:01.441 ********* 2025-08-29 14:39:38.241027 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:38.241037 | orchestrator | 2025-08-29 14:39:38.241045 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 14:39:38.241054 | orchestrator | Friday 29 August 2025 14:39:34 +0000 (0:00:00.660) 0:00:02.102 ********* 2025-08-29 14:39:38.241062 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:38.241071 | orchestrator | 2025-08-29 14:39:38.241079 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 14:39:38.241088 | orchestrator | 2025-08-29 14:39:38.241096 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 14:39:38.241105 | orchestrator | Friday 29 August 2025 14:39:34 +0000 (0:00:00.122) 0:00:02.224 ********* 2025-08-29 14:39:38.241113 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:38.241122 | orchestrator | 2025-08-29 14:39:38.241130 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 14:39:38.241139 | orchestrator | Friday 29 August 2025 14:39:34 
+0000 (0:00:00.209) 0:00:02.433 ********* 2025-08-29 14:39:38.241147 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:38.241156 | orchestrator | 2025-08-29 14:39:38.241165 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 14:39:38.241173 | orchestrator | Friday 29 August 2025 14:39:35 +0000 (0:00:00.666) 0:00:03.100 ********* 2025-08-29 14:39:38.241182 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:38.241190 | orchestrator | 2025-08-29 14:39:38.241203 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 14:39:38.241212 | orchestrator | 2025-08-29 14:39:38.241221 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 14:39:38.241230 | orchestrator | Friday 29 August 2025 14:39:35 +0000 (0:00:00.134) 0:00:03.235 ********* 2025-08-29 14:39:38.241238 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:38.241247 | orchestrator | 2025-08-29 14:39:38.241255 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 14:39:38.241264 | orchestrator | Friday 29 August 2025 14:39:35 +0000 (0:00:00.151) 0:00:03.387 ********* 2025-08-29 14:39:38.241272 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:38.241281 | orchestrator | 2025-08-29 14:39:38.241290 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 14:39:38.241298 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:00.684) 0:00:04.071 ********* 2025-08-29 14:39:38.241307 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:38.241315 | orchestrator | 2025-08-29 14:39:38.241324 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 14:39:38.241332 | orchestrator | 2025-08-29 14:39:38.241341 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 14:39:38.241349 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:00.105) 0:00:04.176 ********* 2025-08-29 14:39:38.241358 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:38.241366 | orchestrator | 2025-08-29 14:39:38.241375 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 14:39:38.241383 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:00.102) 0:00:04.279 ********* 2025-08-29 14:39:38.241392 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:38.241400 | orchestrator | 2025-08-29 14:39:38.241409 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 14:39:38.241439 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:00.699) 0:00:04.979 ********* 2025-08-29 14:39:38.241448 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:38.241457 | orchestrator | 2025-08-29 14:39:38.241465 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 14:39:38.241474 | orchestrator | 2025-08-29 14:39:38.241482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 14:39:38.241491 | orchestrator | Friday 29 August 2025 14:39:37 +0000 (0:00:00.119) 0:00:05.099 ********* 2025-08-29 14:39:38.241499 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:38.241515 | orchestrator | 2025-08-29 14:39:38.241524 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 14:39:38.241533 | orchestrator | Friday 29 August 2025 14:39:37 +0000 (0:00:00.115) 0:00:05.214 ********* 2025-08-29 14:39:38.241541 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:38.241550 | orchestrator | 2025-08-29 14:39:38.241558 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 14:39:38.241567 | orchestrator | Friday 29 August 2025 14:39:37 +0000 (0:00:00.657) 0:00:05.872 ********* 2025-08-29 14:39:38.241590 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:38.241599 | orchestrator | 2025-08-29 14:39:38.241608 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:39:38.241618 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:38.241646 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:38.241655 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:38.241668 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:38.241677 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:38.241686 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:39:38.241694 | orchestrator | 2025-08-29 14:39:38.241703 | orchestrator | 2025-08-29 14:39:38.241712 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:39:38.241721 | orchestrator | Friday 29 August 2025 14:39:37 +0000 (0:00:00.038) 0:00:05.910 ********* 2025-08-29 14:39:38.241729 | orchestrator | =============================================================================== 2025-08-29 14:39:38.241738 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s 2025-08-29 14:39:38.241746 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2025-08-29 14:39:38.241755 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-08-29 14:39:38.557020 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-08-29 14:39:50.571934 | orchestrator | 2025-08-29 14:39:50 | INFO  | Task 39a87bb7-62a6-45b8-8569-52603c7d46e1 (wait-for-connection) was prepared for execution. 2025-08-29 14:39:50.572050 | orchestrator | 2025-08-29 14:39:50 | INFO  | It takes a moment until task 39a87bb7-62a6-45b8-8569-52603c7d46e1 (wait-for-connection) has been started and output is visible here. 
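The reboot plays above follow a confirm-then-fire-and-forget pattern: each play aborts unless ireallymeanit=yes is passed, triggers the reboot asynchronously, and deliberately skips the wait step, leaving reachability checking to the wait-for-connection play that runs next. A minimal sketch of that pattern, assuming the fail, shell and wait_for_connection modules rather than the exact OSISM playbooks:

- name: Reboot systems
  hosts: testbed-nodes
  become: true
  vars:
    ireallymeanit: "no"
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to really reboot the nodes."
      when: ireallymeanit != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      # fire-and-forget: detach immediately so the SSH session is not torn down mid-task
      ansible.builtin.shell: sleep 2 && shutdown -r now "Reboot triggered by Ansible"
      async: 1
      poll: 0

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 5        # illustrative values only
        timeout: 600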
2025-08-29 14:40:06.589103 | orchestrator | 2025-08-29 14:40:06.589250 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-08-29 14:40:06.589268 | orchestrator | 2025-08-29 14:40:06.589279 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-08-29 14:40:06.589289 | orchestrator | Friday 29 August 2025 14:39:54 +0000 (0:00:00.245) 0:00:00.245 ********* 2025-08-29 14:40:06.589299 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:06.589310 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:06.589320 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:06.589329 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:06.589338 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:06.589348 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:06.589357 | orchestrator | 2025-08-29 14:40:06.589424 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:40:06.589438 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:06.589475 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:06.589486 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:06.589495 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:06.589505 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:06.589514 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:06.589524 | orchestrator | 2025-08-29 14:40:06.589533 | orchestrator | 2025-08-29 14:40:06.589543 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:40:06.589552 | orchestrator | Friday 29 August 2025 14:40:06 +0000 (0:00:11.558) 0:00:11.803 ********* 2025-08-29 14:40:06.589562 | orchestrator | =============================================================================== 2025-08-29 14:40:06.589571 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2025-08-29 14:40:06.866808 | orchestrator | + osism apply hddtemp 2025-08-29 14:40:18.606532 | orchestrator | 2025-08-29 14:40:18 | INFO  | Task cc950188-1c99-487b-8186-96db00d92223 (hddtemp) was prepared for execution. 2025-08-29 14:40:18.606625 | orchestrator | 2025-08-29 14:40:18 | INFO  | It takes a moment until task cc950188-1c99-487b-8186-96db00d92223 (hddtemp) has been started and output is visible here. 
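The hddtemp role that starts next replaces the legacy hddtemp daemon with the in-kernel drivetemp hwmon module: the package is removed, the module is enabled, and its availability is checked. How the role enables the module is not visible in this log; the sketch below shows one plausible way to persist and load it, using an assumed /etc/modules-load.d entry and the community.general.modprobe module:

- name: Enable drivetemp kernel module
  hosts: all
  become: true
  tasks:
    - name: Persist drivetemp across reboots (assumed mechanism)
      ansible.builtin.copy:
        content: "drivetemp\n"
        dest: /etc/modules-load.d/drivetemp.conf
        mode: "0644"

    - name: Load drivetemp module now
      community.general.modprobe:
        name: drivetemp
        state: present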
2025-08-29 14:40:45.889852 | orchestrator | 2025-08-29 14:40:45.889964 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-08-29 14:40:45.889981 | orchestrator | 2025-08-29 14:40:45.889993 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-08-29 14:40:45.890005 | orchestrator | Friday 29 August 2025 14:40:22 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-08-29 14:40:45.890080 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:45.890096 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:45.890108 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:45.890119 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:45.890130 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:45.890141 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:45.890152 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:45.890163 | orchestrator | 2025-08-29 14:40:45.890174 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-08-29 14:40:45.890185 | orchestrator | Friday 29 August 2025 14:40:23 +0000 (0:00:00.557) 0:00:00.793 ********* 2025-08-29 14:40:45.890215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:40:45.890230 | orchestrator | 2025-08-29 14:40:45.890241 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-08-29 14:40:45.890252 | orchestrator | Friday 29 August 2025 14:40:24 +0000 (0:00:01.033) 0:00:01.827 ********* 2025-08-29 14:40:45.890264 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:45.890284 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:45.890381 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:45.890404 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:45.890423 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:45.890440 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:45.890452 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:45.890465 | orchestrator | 2025-08-29 14:40:45.890478 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-08-29 14:40:45.890490 | orchestrator | Friday 29 August 2025 14:40:26 +0000 (0:00:01.935) 0:00:03.762 ********* 2025-08-29 14:40:45.890528 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:45.890542 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:40:45.890554 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:40:45.890566 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:40:45.890578 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:40:45.890590 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:40:45.890602 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:40:45.890614 | orchestrator | 2025-08-29 14:40:45.890627 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-08-29 14:40:45.890639 | orchestrator | Friday 29 August 2025 14:40:27 +0000 (0:00:01.036) 0:00:04.799 ********* 2025-08-29 14:40:45.890651 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:45.890663 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:45.890675 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:45.890687 | orchestrator | ok: [testbed-node-3] 2025-08-29 
14:40:45.890699 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:45.890710 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:45.890722 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:45.890734 | orchestrator | 2025-08-29 14:40:45.890746 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-08-29 14:40:45.890759 | orchestrator | Friday 29 August 2025 14:40:28 +0000 (0:00:01.099) 0:00:05.899 ********* 2025-08-29 14:40:45.890771 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:40:45.890784 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:40:45.890796 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:40:45.890806 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:45.890817 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:40:45.890827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:40:45.890838 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:40:45.890849 | orchestrator | 2025-08-29 14:40:45.890859 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-08-29 14:40:45.890870 | orchestrator | Friday 29 August 2025 14:40:28 +0000 (0:00:00.695) 0:00:06.595 ********* 2025-08-29 14:40:45.890881 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:45.890891 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:40:45.890902 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:40:45.890912 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:40:45.890923 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:40:45.890933 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:40:45.890944 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:40:45.890954 | orchestrator | 2025-08-29 14:40:45.890965 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-08-29 14:40:45.890975 | orchestrator | Friday 29 August 2025 14:40:42 +0000 (0:00:13.241) 0:00:19.836 ********* 2025-08-29 14:40:45.890987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:40:45.890998 | orchestrator | 2025-08-29 14:40:45.891009 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-08-29 14:40:45.891019 | orchestrator | Friday 29 August 2025 14:40:43 +0000 (0:00:01.351) 0:00:21.188 ********* 2025-08-29 14:40:45.891030 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:45.891040 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:40:45.891051 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:40:45.891062 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:40:45.891073 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:40:45.891083 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:40:45.891094 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:40:45.891104 | orchestrator | 2025-08-29 14:40:45.891115 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:40:45.891126 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:45.891167 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:45.891179 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:45.891190 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:45.891201 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:45.891212 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:45.891229 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:45.891240 | orchestrator | 2025-08-29 14:40:45.891251 | orchestrator | 2025-08-29 14:40:45.891262 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:40:45.891279 | orchestrator | Friday 29 August 2025 14:40:45 +0000 (0:00:01.953) 0:00:23.141 ********* 2025-08-29 14:40:45.891321 | orchestrator | =============================================================================== 2025-08-29 14:40:45.891339 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.24s 2025-08-29 14:40:45.891355 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s 2025-08-29 14:40:45.891373 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-08-29 14:40:45.891390 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 2025-08-29 14:40:45.891408 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s 2025-08-29 14:40:45.891427 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.04s 2025-08-29 14:40:45.891445 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.03s 2025-08-29 14:40:45.891461 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.70s 2025-08-29 14:40:45.891472 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.56s 2025-08-29 14:40:46.181501 | orchestrator | ++ semver 9.2.0 7.1.1 2025-08-29 14:40:46.236661 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 14:40:46.236726 | orchestrator | + sudo systemctl restart manager.service 2025-08-29 14:40:59.264835 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 14:40:59.264928 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 14:40:59.264944 | orchestrator | + local max_attempts=60 2025-08-29 14:40:59.264956 | orchestrator | + local name=ceph-ansible 2025-08-29 14:40:59.264967 | orchestrator | + local attempt_num=1 2025-08-29 14:40:59.264978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:40:59.295596 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:40:59.295639 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:40:59.295651 | orchestrator | + sleep 5 2025-08-29 14:41:04.300768 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:04.332676 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:04.332781 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:04.332797 | orchestrator | + sleep 5 2025-08-29 14:41:09.336199 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:09.371949 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:09.372028 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:09.372041 | orchestrator | + sleep 5 2025-08-29 14:41:14.377216 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:14.413223 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:14.413435 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:14.413467 | orchestrator | + sleep 5 2025-08-29 14:41:19.416810 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:19.454776 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:19.455100 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:19.455124 | orchestrator | + sleep 5 2025-08-29 14:41:24.459956 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:24.494976 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:24.495466 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:24.495547 | orchestrator | + sleep 5 2025-08-29 14:41:29.499007 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:29.539181 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:29.539279 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:29.539292 | orchestrator | + sleep 5 2025-08-29 14:41:34.546269 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:34.583355 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:34.583443 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:34.583457 | orchestrator | + sleep 5 2025-08-29 14:41:39.587262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:39.614504 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:39.614592 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:39.614608 | orchestrator | + sleep 5 2025-08-29 14:41:44.617437 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:44.649273 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:44.649358 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:44.649373 | orchestrator | + sleep 5 2025-08-29 14:41:49.652844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:49.685469 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:49.685586 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:49.685594 | orchestrator | + sleep 5 2025-08-29 14:41:54.689661 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:54.730693 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:54.730813 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:54.730826 | orchestrator | + sleep 5 2025-08-29 14:41:59.735545 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:41:59.777026 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:41:59.777079 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:41:59.777089 | orchestrator | + sleep 5 
2025-08-29 14:42:04.782214 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:42:04.806834 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:42:04.806902 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 14:42:04.807017 | orchestrator | + local max_attempts=60 2025-08-29 14:42:04.807036 | orchestrator | + local name=kolla-ansible 2025-08-29 14:42:04.807048 | orchestrator | + local attempt_num=1 2025-08-29 14:42:04.807069 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 14:42:04.835586 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:42:04.835640 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 14:42:04.835653 | orchestrator | + local max_attempts=60 2025-08-29 14:42:04.835664 | orchestrator | + local name=osism-ansible 2025-08-29 14:42:04.835675 | orchestrator | + local attempt_num=1 2025-08-29 14:42:04.836493 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 14:42:04.867258 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:42:04.867316 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 14:42:04.867329 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 14:42:05.011368 | orchestrator | ARA in ceph-ansible already disabled. 2025-08-29 14:42:05.151482 | orchestrator | ARA in kolla-ansible already disabled. 2025-08-29 14:42:05.305552 | orchestrator | ARA in osism-ansible already disabled. 2025-08-29 14:42:05.466865 | orchestrator | ARA in osism-kubernetes already disabled. 2025-08-29 14:42:05.467469 | orchestrator | + osism apply gather-facts 2025-08-29 14:42:17.474858 | orchestrator | 2025-08-29 14:42:17 | INFO  | Task ebcd7ad4-a7a6-49d5-aacd-e4131d5a4353 (gather-facts) was prepared for execution. 2025-08-29 14:42:17.475661 | orchestrator | 2025-08-29 14:42:17 | INFO  | It takes a moment until task ebcd7ad4-a7a6-49d5-aacd-e4131d5a4353 (gather-facts) has been started and output is visible here. 
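wait_for_container_healthy, traced above for ceph-ansible, kolla-ansible and osism-ansible, simply polls docker inspect for the container's healthcheck status every five seconds until it reports healthy, giving up after a maximum number of attempts. A standalone version of the same loop (same idea, not the testbed's exact helper):

#!/usr/bin/env bash
# Poll a container's Docker healthcheck until it reports "healthy" or attempts run out.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status

    until status=$(docker inspect -f '{{.State.Health.Status}}' "$name") \
          && [[ "$status" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy (last status: ${status:-unknown})" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage mirroring the deploy script: up to 60 attempts, i.e. roughly five minutes per container.
wait_for_container_healthy 60 ceph-ansible
wait_for_container_healthy 60 kolla-ansible
wait_for_container_healthy 60 osism-ansible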
2025-08-29 14:42:29.984844 | orchestrator | 2025-08-29 14:42:29.984904 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:42:29.984911 | orchestrator | 2025-08-29 14:42:29.984916 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:42:29.984921 | orchestrator | Friday 29 August 2025 14:42:21 +0000 (0:00:00.219) 0:00:00.219 ********* 2025-08-29 14:42:29.984926 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:42:29.984931 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:42:29.984936 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:42:29.984940 | orchestrator | ok: [testbed-manager] 2025-08-29 14:42:29.984945 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:42:29.984949 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:42:29.984955 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:42:29.984959 | orchestrator | 2025-08-29 14:42:29.984964 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:42:29.984969 | orchestrator | 2025-08-29 14:42:29.984973 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:42:29.984978 | orchestrator | Friday 29 August 2025 14:42:29 +0000 (0:00:08.269) 0:00:08.489 ********* 2025-08-29 14:42:29.984983 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:42:29.984988 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:42:29.984993 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:42:29.984997 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:42:29.985002 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:42:29.985006 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:42:29.985011 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:42:29.985015 | orchestrator | 2025-08-29 14:42:29.985020 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:42:29.985025 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985030 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985035 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985039 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985044 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985049 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985053 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:42:29.985058 | orchestrator | 2025-08-29 14:42:29.985063 | orchestrator | 2025-08-29 14:42:29.985067 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:42:29.985072 | orchestrator | Friday 29 August 2025 14:42:29 +0000 (0:00:00.444) 0:00:08.934 ********* 2025-08-29 14:42:29.985076 | orchestrator | =============================================================================== 2025-08-29 14:42:29.985081 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.27s 2025-08-29 
14:42:29.985085 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2025-08-29 14:42:30.156120 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-08-29 14:42:30.165769 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-08-29 14:42:30.177164 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-08-29 14:42:30.184797 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-08-29 14:42:30.192737 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-08-29 14:42:30.212676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-08-29 14:42:30.223568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-08-29 14:42:30.236265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-08-29 14:42:30.247615 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-08-29 14:42:30.259004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-08-29 14:42:30.269541 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-08-29 14:42:30.284878 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-08-29 14:42:30.295303 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-08-29 14:42:30.309519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-08-29 14:42:30.321026 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-08-29 14:42:30.338481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-08-29 14:42:30.350229 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-08-29 14:42:30.363335 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-08-29 14:42:30.374603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-08-29 14:42:30.385518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-08-29 14:42:30.396997 | orchestrator | + [[ false == \t\r\u\e ]] 2025-08-29 14:42:30.519571 | orchestrator | ok: Runtime: 0:22:52.717645 2025-08-29 14:42:30.625552 | 2025-08-29 14:42:30.625722 | TASK [Deploy services] 2025-08-29 14:42:31.158982 | orchestrator | skipping: Conditional result was False 2025-08-29 14:42:31.177023 | 2025-08-29 14:42:31.177174 | TASK [Deploy in a nutshell] 2025-08-29 14:42:31.864703 | orchestrator | + set -e 
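Version-dependent steps in these deploy scripts are gated with a semver helper: earlier the result of `semver 9.2.0 7.1.1` was tested with `[[ 1 -ge 0 ]]`, and the same gate reappears below as `semver 9.2.0 7.0.0`, so the helper evidently prints -1, 0 or 1 depending on how the installed manager version compares with a threshold. A comparable check can be built on sort -V alone; the function below is a hedged stand-in for that helper, not the binary actually present on the manager:

#!/usr/bin/env bash
# Print 1, 0 or -1 depending on whether version $1 is newer than, equal to,
# or older than version $2 (assumed semantics of the "semver" helper seen in the log).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1    # $2 sorts first, so $1 is the newer version
    else
        echo -1
    fi
}

# Gate a step on "manager version >= 7.0.0", mirroring the script's pattern.
if [[ "$(semver_cmp 9.2.0 7.0.0)" -ge 0 ]]; then
    echo "manager is new enough for this step"
fi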
2025-08-29 14:42:31.864945 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 14:42:31.864975 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 14:42:31.864995 | orchestrator | ++ INTERACTIVE=false 2025-08-29 14:42:31.865007 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 14:42:31.865018 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 14:42:31.865042 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 14:42:31.865084 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 14:42:31.865108 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 14:42:31.865120 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 14:42:31.865165 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 14:42:31.865177 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 14:42:31.865193 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 14:42:31.865202 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 14:42:31.865220 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 14:42:31.865230 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 14:42:31.865242 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 14:42:31.865252 | orchestrator | ++ export ARA=false 2025-08-29 14:42:31.865262 | orchestrator | ++ ARA=false 2025-08-29 14:42:31.865272 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 14:42:31.865282 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 14:42:31.865292 | orchestrator | ++ export TEMPEST=false 2025-08-29 14:42:31.865302 | orchestrator | ++ TEMPEST=false 2025-08-29 14:42:31.865311 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 14:42:31.865321 | orchestrator | ++ IS_ZUUL=true 2025-08-29 14:42:31.865330 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:42:31.865340 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 14:42:31.865350 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 14:42:31.865360 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 14:42:31.865369 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 14:42:31.865379 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 14:42:31.865388 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 14:42:31.865398 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 14:42:31.865408 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 14:42:31.865418 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 14:42:31.865428 | orchestrator | + echo 2025-08-29 14:42:31.865437 | orchestrator | 2025-08-29 14:42:31.865447 | orchestrator | # PULL IMAGES 2025-08-29 14:42:31.865457 | orchestrator | 2025-08-29 14:42:31.865467 | orchestrator | + echo '# PULL IMAGES' 2025-08-29 14:42:31.865476 | orchestrator | + echo 2025-08-29 14:42:31.866545 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 14:42:31.917394 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 14:42:31.917501 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-08-29 14:42:33.239658 | orchestrator | 2025-08-29 14:42:33 | INFO  | Trying to run play pull-images in environment custom 2025-08-29 14:42:43.321745 | orchestrator | 2025-08-29 14:42:43 | INFO  | Task d8a49bee-8600-461f-8c4e-f9bd2289a3ff (pull-images) was prepared for execution. 2025-08-29 14:42:43.321872 | orchestrator | 2025-08-29 14:42:43 | INFO  | Task d8a49bee-8600-461f-8c4e-f9bd2289a3ff is running in background. No more output. Check ARA for logs. 
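Everything in this "deploy in a nutshell" phase is driven by the environment sourced above: manager-vars.sh fixes the node count, Ceph/OpenStack/manager versions, deploy mode and Ceph stack, while include.sh disables interactive prompts and sets OSISM_APPLY_RETRY. The pull-images play is then dispatched with --no-wait, so it keeps running in the background (as the log notes, no further output appears here) while the foreground continues. A hypothetical retry wrapper in the spirit of OSISM_APPLY_RETRY, not the actual include.sh implementation, might look like:

#!/usr/bin/env bash
# Hypothetical helper: retry "osism apply" up to OSISM_APPLY_RETRY times.
osism_apply_with_retry() {
    local retries=${OSISM_APPLY_RETRY:-1}
    local attempt
    for (( attempt = 1; attempt <= retries; attempt++ )); do
        if osism apply "$@"; then
            return 0
        fi
        echo "osism apply $* failed (attempt ${attempt}/${retries})" >&2
    done
    return 1
}

# Example, reusing an invocation from this job:
export OSISM_APPLY_RETRY=2
osism_apply_with_retry wait-for-connection -l testbed-nodes -e ireallymeanit=yes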
2025-08-29 14:42:45.563092 | orchestrator | 2025-08-29 14:42:45 | INFO  | Trying to run play wipe-partitions in environment custom 2025-08-29 14:42:55.669180 | orchestrator | 2025-08-29 14:42:55 | INFO  | Task 421c551d-b010-4584-899c-6dc49371a0a2 (wipe-partitions) was prepared for execution. 2025-08-29 14:42:55.669261 | orchestrator | 2025-08-29 14:42:55 | INFO  | It takes a moment until task 421c551d-b010-4584-899c-6dc49371a0a2 (wipe-partitions) has been started and output is visible here. 2025-08-29 14:43:09.517619 | orchestrator | 2025-08-29 14:43:09.517733 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-08-29 14:43:09.517750 | orchestrator | 2025-08-29 14:43:09.517762 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-08-29 14:43:09.517779 | orchestrator | Friday 29 August 2025 14:43:01 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-08-29 14:43:09.517793 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:43:09.517805 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:43:09.517817 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:43:09.517828 | orchestrator | 2025-08-29 14:43:09.517839 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-08-29 14:43:09.517879 | orchestrator | Friday 29 August 2025 14:43:01 +0000 (0:00:00.573) 0:00:00.746 ********* 2025-08-29 14:43:09.517891 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:09.517902 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:09.517913 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:43:09.517929 | orchestrator | 2025-08-29 14:43:09.517941 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-08-29 14:43:09.517952 | orchestrator | Friday 29 August 2025 14:43:02 +0000 (0:00:00.241) 0:00:00.987 ********* 2025-08-29 14:43:09.517963 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:09.517975 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:09.517985 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:09.517996 | orchestrator | 2025-08-29 14:43:09.518008 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-08-29 14:43:09.518087 | orchestrator | Friday 29 August 2025 14:43:02 +0000 (0:00:00.712) 0:00:01.699 ********* 2025-08-29 14:43:09.518100 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:09.518147 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:09.518160 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:43:09.518172 | orchestrator | 2025-08-29 14:43:09.518185 | orchestrator | TASK [Check device availability] *********************************************** 2025-08-29 14:43:09.518197 | orchestrator | Friday 29 August 2025 14:43:03 +0000 (0:00:00.271) 0:00:01.970 ********* 2025-08-29 14:43:09.518210 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 14:43:09.518227 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 14:43:09.518240 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 14:43:09.518252 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 14:43:09.518265 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 14:43:09.518278 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 14:43:09.518291 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-08-29 14:43:09.518303 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 14:43:09.518316 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 14:43:09.518329 | orchestrator | 2025-08-29 14:43:09.518341 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-08-29 14:43:09.518354 | orchestrator | Friday 29 August 2025 14:43:04 +0000 (0:00:01.187) 0:00:03.158 ********* 2025-08-29 14:43:09.518367 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 14:43:09.518380 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 14:43:09.518392 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 14:43:09.518405 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 14:43:09.518417 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 14:43:09.518429 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 14:43:09.518442 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 14:43:09.518454 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 14:43:09.518467 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 14:43:09.518479 | orchestrator | 2025-08-29 14:43:09.518491 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-08-29 14:43:09.518502 | orchestrator | Friday 29 August 2025 14:43:05 +0000 (0:00:01.353) 0:00:04.511 ********* 2025-08-29 14:43:09.518513 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 14:43:09.518524 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 14:43:09.518535 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 14:43:09.518546 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 14:43:09.518556 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 14:43:09.518574 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 14:43:09.518585 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 14:43:09.518596 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 14:43:09.518617 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 14:43:09.518628 | orchestrator | 2025-08-29 14:43:09.518639 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-08-29 14:43:09.518650 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:02.266) 0:00:06.778 ********* 2025-08-29 14:43:09.518661 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:43:09.518672 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:43:09.518683 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:43:09.518693 | orchestrator | 2025-08-29 14:43:09.518704 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-08-29 14:43:09.518715 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.611) 0:00:07.390 ********* 2025-08-29 14:43:09.518726 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:43:09.518737 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:43:09.518748 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:43:09.518758 | orchestrator | 2025-08-29 14:43:09.518769 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:43:09.518782 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:09.518795 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:09.518827 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:09.518839 | orchestrator | 2025-08-29 14:43:09.518850 | orchestrator | 2025-08-29 14:43:09.518861 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:43:09.518872 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.594) 0:00:07.985 ********* 2025-08-29 14:43:09.518883 | orchestrator | =============================================================================== 2025-08-29 14:43:09.518894 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.27s 2025-08-29 14:43:09.518904 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.35s 2025-08-29 14:43:09.518918 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2025-08-29 14:43:09.518936 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s 2025-08-29 14:43:09.518954 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-08-29 14:43:09.518972 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-08-29 14:43:09.518990 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-08-29 14:43:09.519008 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-08-29 14:43:09.519025 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-08-29 14:43:21.668481 | orchestrator | 2025-08-29 14:43:21 | INFO  | Task 5a89a8d0-f635-4c2b-a7ab-55251470d984 (facts) was prepared for execution. 2025-08-29 14:43:21.668826 | orchestrator | 2025-08-29 14:43:21 | INFO  | It takes a moment until task 5a89a8d0-f635-4c2b-a7ab-55251470d984 (facts) has been started and output is visible here. 
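The wipe-partitions play above ensures the extra disks on the three storage nodes are clean before Ceph is deployed: it removes any leftover ceph/rook logical volumes, wipes filesystem and partition signatures with wipefs, zeroes the first 32M of each device, and finally reloads udev rules and triggers device events so the kernel re-reads the now-empty disks. Done by hand on one node, the core of it would look roughly like this (the device list is the one from the play; the exact commands are an illustrative assumption):

#!/usr/bin/env bash
# Manual equivalent of the wipe-partitions play for a single node.
# /dev/sdb..sdd are the OSD candidate disks seen in the play output.
set -euo pipefail

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    # Remove filesystem, RAID and LVM signatures ...
    sudo wipefs --all "$dev"
    # ... and zero the first 32 MiB so old partition tables and Ceph labels are gone.
    sudo dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync
done

# Let udev and the kernel pick up the now-empty devices.
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=block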
2025-08-29 14:43:33.879273 | orchestrator | 2025-08-29 14:43:33.879367 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 14:43:33.879378 | orchestrator | 2025-08-29 14:43:33.879386 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 14:43:33.879393 | orchestrator | Friday 29 August 2025 14:43:25 +0000 (0:00:00.252) 0:00:00.252 ********* 2025-08-29 14:43:33.879401 | orchestrator | ok: [testbed-manager] 2025-08-29 14:43:33.879409 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:43:33.879415 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:43:33.879422 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:43:33.879447 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:33.879454 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:33.879461 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:33.879468 | orchestrator | 2025-08-29 14:43:33.879477 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 14:43:33.879484 | orchestrator | Friday 29 August 2025 14:43:26 +0000 (0:00:00.999) 0:00:01.251 ********* 2025-08-29 14:43:33.879491 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:43:33.879499 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:43:33.879506 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:43:33.879512 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:43:33.879519 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:33.879526 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:33.879533 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:43:33.879539 | orchestrator | 2025-08-29 14:43:33.879546 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:43:33.879553 | orchestrator | 2025-08-29 14:43:33.879560 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:43:33.879567 | orchestrator | Friday 29 August 2025 14:43:27 +0000 (0:00:01.099) 0:00:02.351 ********* 2025-08-29 14:43:33.879573 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:43:33.879580 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:43:33.879587 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:43:33.879595 | orchestrator | ok: [testbed-manager] 2025-08-29 14:43:33.879601 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:33.879608 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:33.879615 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:33.879622 | orchestrator | 2025-08-29 14:43:33.879628 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:43:33.879635 | orchestrator | 2025-08-29 14:43:33.879642 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:43:33.879661 | orchestrator | Friday 29 August 2025 14:43:33 +0000 (0:00:05.660) 0:00:08.011 ********* 2025-08-29 14:43:33.879669 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:43:33.879675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:43:33.879682 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:43:33.879689 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:43:33.879695 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:33.879702 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:33.879708 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 14:43:33.879715 | orchestrator | 2025-08-29 14:43:33.879722 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:43:33.879730 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879739 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879746 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879753 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879759 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879766 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879773 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:43:33.879780 | orchestrator | 2025-08-29 14:43:33.879786 | orchestrator | 2025-08-29 14:43:33.879793 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:43:33.879805 | orchestrator | Friday 29 August 2025 14:43:33 +0000 (0:00:00.467) 0:00:08.479 ********* 2025-08-29 14:43:33.879812 | orchestrator | =============================================================================== 2025-08-29 14:43:33.879819 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.66s 2025-08-29 14:43:33.879826 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2025-08-29 14:43:33.879833 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s 2025-08-29 14:43:33.879840 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2025-08-29 14:43:36.089415 | orchestrator | 2025-08-29 14:43:36 | INFO  | Task 42409fda-25e7-4220-bae9-f4eae6b8d8ca (ceph-configure-lvm-volumes) was prepared for execution. 2025-08-29 14:43:36.089528 | orchestrator | 2025-08-29 14:43:36 | INFO  | It takes a moment until task 42409fda-25e7-4220-bae9-f4eae6b8d8ca (ceph-configure-lvm-volumes) has been started and output is visible here. 
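The ceph-configure-lvm-volumes play logged below inventories each storage node's block devices (including their /dev/disk/by-id links and partitions), assigns a stable UUID to every OSD candidate disk, and from that derives the ceph_osd_devices and lvm_volumes structures consumed by ceph-ansible: in the block-only layout used here each disk is referenced as an LV named osd-block-<uuid> inside a VG named ceph-<uuid>. How those volumes are actually created is not part of this excerpt; purely as an illustration of what the generated names point at, the pair for one disk could be created like this:

#!/usr/bin/env bash
# Illustrative creation of the VG/LV pair referenced by one generated lvm_volumes entry.
# The UUID is the one the play assigned to /dev/sdb on testbed-node-3; creating the
# volumes this way is an assumption, not something shown in this log excerpt.
set -euo pipefail

uuid=95143370-f7d7-5ec5-ad3d-8af7ad027df9
dev=/dev/sdb

sudo pvcreate "$dev"
sudo vgcreate "ceph-${uuid}" "$dev"
sudo lvcreate -n "osd-block-${uuid}" -l 100%FREE "ceph-${uuid}"

# ceph-ansible's lvm_volumes entry then references:
#   data:    osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9
#   data_vg: ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9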
2025-08-29 14:43:48.314311 | orchestrator | 2025-08-29 14:43:48.314512 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:43:48.314531 | orchestrator | 2025-08-29 14:43:48.314543 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:43:48.314557 | orchestrator | Friday 29 August 2025 14:43:40 +0000 (0:00:00.328) 0:00:00.328 ********* 2025-08-29 14:43:48.314570 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:43:48.314581 | orchestrator | 2025-08-29 14:43:48.314592 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:43:48.314603 | orchestrator | Friday 29 August 2025 14:43:40 +0000 (0:00:00.288) 0:00:00.617 ********* 2025-08-29 14:43:48.314615 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:48.314627 | orchestrator | 2025-08-29 14:43:48.314638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.314648 | orchestrator | Friday 29 August 2025 14:43:41 +0000 (0:00:00.240) 0:00:00.858 ********* 2025-08-29 14:43:48.314660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:43:48.314671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:43:48.314682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:43:48.314693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:43:48.314704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:43:48.314715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:43:48.314726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:43:48.314737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:43:48.314748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 14:43:48.314759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:43:48.314770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:43:48.314804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:43:48.314816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:43:48.314827 | orchestrator | 2025-08-29 14:43:48.314838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.314849 | orchestrator | Friday 29 August 2025 14:43:41 +0000 (0:00:00.391) 0:00:01.250 ********* 2025-08-29 14:43:48.314860 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.314871 | orchestrator | 2025-08-29 14:43:48.314904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.314916 | orchestrator | Friday 29 August 2025 14:43:42 +0000 (0:00:00.499) 0:00:01.749 ********* 2025-08-29 14:43:48.314926 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:43:48.314937 | orchestrator | 2025-08-29 14:43:48.314948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.314959 | orchestrator | Friday 29 August 2025 14:43:42 +0000 (0:00:00.201) 0:00:01.950 ********* 2025-08-29 14:43:48.314970 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.314981 | orchestrator | 2025-08-29 14:43:48.314991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315002 | orchestrator | Friday 29 August 2025 14:43:42 +0000 (0:00:00.197) 0:00:02.148 ********* 2025-08-29 14:43:48.315013 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315024 | orchestrator | 2025-08-29 14:43:48.315039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315068 | orchestrator | Friday 29 August 2025 14:43:42 +0000 (0:00:00.203) 0:00:02.351 ********* 2025-08-29 14:43:48.315079 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315101 | orchestrator | 2025-08-29 14:43:48.315112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315123 | orchestrator | Friday 29 August 2025 14:43:42 +0000 (0:00:00.196) 0:00:02.548 ********* 2025-08-29 14:43:48.315134 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315167 | orchestrator | 2025-08-29 14:43:48.315178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315189 | orchestrator | Friday 29 August 2025 14:43:43 +0000 (0:00:00.206) 0:00:02.754 ********* 2025-08-29 14:43:48.315200 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315210 | orchestrator | 2025-08-29 14:43:48.315221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315232 | orchestrator | Friday 29 August 2025 14:43:43 +0000 (0:00:00.222) 0:00:02.977 ********* 2025-08-29 14:43:48.315243 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315254 | orchestrator | 2025-08-29 14:43:48.315264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315275 | orchestrator | Friday 29 August 2025 14:43:43 +0000 (0:00:00.209) 0:00:03.187 ********* 2025-08-29 14:43:48.315286 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f) 2025-08-29 14:43:48.315298 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f) 2025-08-29 14:43:48.315309 | orchestrator | 2025-08-29 14:43:48.315320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315331 | orchestrator | Friday 29 August 2025 14:43:43 +0000 (0:00:00.413) 0:00:03.600 ********* 2025-08-29 14:43:48.315363 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d) 2025-08-29 14:43:48.315375 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d) 2025-08-29 14:43:48.315386 | orchestrator | 2025-08-29 14:43:48.315396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315407 | orchestrator | Friday 29 August 2025 14:43:44 +0000 (0:00:00.417) 0:00:04.017 ********* 2025-08-29 
14:43:48.315418 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e) 2025-08-29 14:43:48.315428 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e) 2025-08-29 14:43:48.315439 | orchestrator | 2025-08-29 14:43:48.315450 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315461 | orchestrator | Friday 29 August 2025 14:43:44 +0000 (0:00:00.639) 0:00:04.657 ********* 2025-08-29 14:43:48.315471 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba) 2025-08-29 14:43:48.315490 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba) 2025-08-29 14:43:48.315501 | orchestrator | 2025-08-29 14:43:48.315512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:48.315523 | orchestrator | Friday 29 August 2025 14:43:45 +0000 (0:00:00.622) 0:00:05.279 ********* 2025-08-29 14:43:48.315533 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:43:48.315544 | orchestrator | 2025-08-29 14:43:48.315555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315572 | orchestrator | Friday 29 August 2025 14:43:46 +0000 (0:00:00.731) 0:00:06.011 ********* 2025-08-29 14:43:48.315583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:43:48.315593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:43:48.315604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:43:48.315614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:43:48.315625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:43:48.315636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:43:48.315646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:43:48.315657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:43:48.315668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:43:48.315678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:43:48.315689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:43:48.315699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:43:48.315710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:43:48.315721 | orchestrator | 2025-08-29 14:43:48.315731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315742 | orchestrator | Friday 29 August 2025 14:43:46 +0000 (0:00:00.365) 0:00:06.376 ********* 2025-08-29 14:43:48.315753 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 14:43:48.315763 | orchestrator | 2025-08-29 14:43:48.315774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315785 | orchestrator | Friday 29 August 2025 14:43:46 +0000 (0:00:00.207) 0:00:06.584 ********* 2025-08-29 14:43:48.315795 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315806 | orchestrator | 2025-08-29 14:43:48.315817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315828 | orchestrator | Friday 29 August 2025 14:43:47 +0000 (0:00:00.210) 0:00:06.794 ********* 2025-08-29 14:43:48.315838 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315849 | orchestrator | 2025-08-29 14:43:48.315860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315870 | orchestrator | Friday 29 August 2025 14:43:47 +0000 (0:00:00.207) 0:00:07.002 ********* 2025-08-29 14:43:48.315881 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315892 | orchestrator | 2025-08-29 14:43:48.315903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315913 | orchestrator | Friday 29 August 2025 14:43:47 +0000 (0:00:00.191) 0:00:07.193 ********* 2025-08-29 14:43:48.315924 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315935 | orchestrator | 2025-08-29 14:43:48.315946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.315962 | orchestrator | Friday 29 August 2025 14:43:47 +0000 (0:00:00.220) 0:00:07.414 ********* 2025-08-29 14:43:48.315973 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.315984 | orchestrator | 2025-08-29 14:43:48.315995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.316006 | orchestrator | Friday 29 August 2025 14:43:47 +0000 (0:00:00.214) 0:00:07.629 ********* 2025-08-29 14:43:48.316017 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:48.316027 | orchestrator | 2025-08-29 14:43:48.316038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:48.316049 | orchestrator | Friday 29 August 2025 14:43:48 +0000 (0:00:00.192) 0:00:07.821 ********* 2025-08-29 14:43:48.316066 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.741918 | orchestrator | 2025-08-29 14:43:55.742069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:55.742088 | orchestrator | Friday 29 August 2025 14:43:48 +0000 (0:00:00.202) 0:00:08.023 ********* 2025-08-29 14:43:55.742100 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:43:55.742112 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:43:55.742123 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:43:55.742133 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:43:55.742167 | orchestrator | 2025-08-29 14:43:55.742178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:55.742189 | orchestrator | Friday 29 August 2025 14:43:49 +0000 (0:00:01.043) 0:00:09.067 ********* 2025-08-29 14:43:55.742199 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742209 | orchestrator | 2025-08-29 14:43:55.742219 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:55.742229 | orchestrator | Friday 29 August 2025 14:43:49 +0000 (0:00:00.196) 0:00:09.264 ********* 2025-08-29 14:43:55.742238 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742248 | orchestrator | 2025-08-29 14:43:55.742258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:55.742268 | orchestrator | Friday 29 August 2025 14:43:49 +0000 (0:00:00.194) 0:00:09.458 ********* 2025-08-29 14:43:55.742277 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742287 | orchestrator | 2025-08-29 14:43:55.742296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:55.742306 | orchestrator | Friday 29 August 2025 14:43:49 +0000 (0:00:00.202) 0:00:09.661 ********* 2025-08-29 14:43:55.742316 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742325 | orchestrator | 2025-08-29 14:43:55.742335 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:43:55.742344 | orchestrator | Friday 29 August 2025 14:43:50 +0000 (0:00:00.201) 0:00:09.862 ********* 2025-08-29 14:43:55.742354 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:43:55.742364 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:43:55.742374 | orchestrator | 2025-08-29 14:43:55.742383 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:43:55.742393 | orchestrator | Friday 29 August 2025 14:43:50 +0000 (0:00:00.170) 0:00:10.033 ********* 2025-08-29 14:43:55.742421 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742431 | orchestrator | 2025-08-29 14:43:55.742441 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:43:55.742450 | orchestrator | Friday 29 August 2025 14:43:50 +0000 (0:00:00.144) 0:00:10.178 ********* 2025-08-29 14:43:55.742460 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742473 | orchestrator | 2025-08-29 14:43:55.742484 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:43:55.742495 | orchestrator | Friday 29 August 2025 14:43:50 +0000 (0:00:00.135) 0:00:10.313 ********* 2025-08-29 14:43:55.742505 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742516 | orchestrator | 2025-08-29 14:43:55.742547 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:43:55.742558 | orchestrator | Friday 29 August 2025 14:43:50 +0000 (0:00:00.140) 0:00:10.453 ********* 2025-08-29 14:43:55.742569 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:55.742580 | orchestrator | 2025-08-29 14:43:55.742590 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:43:55.742600 | orchestrator | Friday 29 August 2025 14:43:50 +0000 (0:00:00.141) 0:00:10.595 ********* 2025-08-29 14:43:55.742612 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95143370-f7d7-5ec5-ad3d-8af7ad027df9'}}) 2025-08-29 14:43:55.742622 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a5a082ef-4dec-5d63-a984-4d3e57643ca0'}}) 2025-08-29 14:43:55.742633 | orchestrator | 
2025-08-29 14:43:55.742644 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:43:55.742654 | orchestrator | Friday 29 August 2025 14:43:51 +0000 (0:00:00.168) 0:00:10.764 ********* 2025-08-29 14:43:55.742666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95143370-f7d7-5ec5-ad3d-8af7ad027df9'}})  2025-08-29 14:43:55.742684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a5a082ef-4dec-5d63-a984-4d3e57643ca0'}})  2025-08-29 14:43:55.742695 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742706 | orchestrator | 2025-08-29 14:43:55.742717 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:43:55.742728 | orchestrator | Friday 29 August 2025 14:43:51 +0000 (0:00:00.166) 0:00:10.930 ********* 2025-08-29 14:43:55.742738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95143370-f7d7-5ec5-ad3d-8af7ad027df9'}})  2025-08-29 14:43:55.742748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a5a082ef-4dec-5d63-a984-4d3e57643ca0'}})  2025-08-29 14:43:55.742759 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742769 | orchestrator | 2025-08-29 14:43:55.742780 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:43:55.742791 | orchestrator | Friday 29 August 2025 14:43:51 +0000 (0:00:00.140) 0:00:11.071 ********* 2025-08-29 14:43:55.742801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95143370-f7d7-5ec5-ad3d-8af7ad027df9'}})  2025-08-29 14:43:55.742812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a5a082ef-4dec-5d63-a984-4d3e57643ca0'}})  2025-08-29 14:43:55.742823 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742833 | orchestrator | 2025-08-29 14:43:55.742859 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:43:55.742869 | orchestrator | Friday 29 August 2025 14:43:51 +0000 (0:00:00.361) 0:00:11.433 ********* 2025-08-29 14:43:55.742878 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:55.742888 | orchestrator | 2025-08-29 14:43:55.742903 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:43:55.742913 | orchestrator | Friday 29 August 2025 14:43:51 +0000 (0:00:00.157) 0:00:11.590 ********* 2025-08-29 14:43:55.742923 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:55.742932 | orchestrator | 2025-08-29 14:43:55.742942 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:43:55.742951 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.139) 0:00:11.730 ********* 2025-08-29 14:43:55.742961 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.742970 | orchestrator | 2025-08-29 14:43:55.742980 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:43:55.742989 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.127) 0:00:11.858 ********* 2025-08-29 14:43:55.742999 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.743008 | orchestrator | 2025-08-29 14:43:55.743018 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-08-29 14:43:55.743035 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.120) 0:00:11.978 ********* 2025-08-29 14:43:55.743045 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.743055 | orchestrator | 2025-08-29 14:43:55.743064 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:43:55.743074 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.130) 0:00:12.109 ********* 2025-08-29 14:43:55.743084 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:43:55.743093 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:43:55.743103 | orchestrator |  "sdb": { 2025-08-29 14:43:55.743112 | orchestrator |  "osd_lvm_uuid": "95143370-f7d7-5ec5-ad3d-8af7ad027df9" 2025-08-29 14:43:55.743122 | orchestrator |  }, 2025-08-29 14:43:55.743132 | orchestrator |  "sdc": { 2025-08-29 14:43:55.743142 | orchestrator |  "osd_lvm_uuid": "a5a082ef-4dec-5d63-a984-4d3e57643ca0" 2025-08-29 14:43:55.743178 | orchestrator |  } 2025-08-29 14:43:55.743194 | orchestrator |  } 2025-08-29 14:43:55.743209 | orchestrator | } 2025-08-29 14:43:55.743226 | orchestrator | 2025-08-29 14:43:55.743241 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:43:55.743256 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.141) 0:00:12.251 ********* 2025-08-29 14:43:55.743269 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.743284 | orchestrator | 2025-08-29 14:43:55.743300 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 14:43:55.743316 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.132) 0:00:12.383 ********* 2025-08-29 14:43:55.743330 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.743340 | orchestrator | 2025-08-29 14:43:55.743350 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:43:55.743359 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.121) 0:00:12.505 ********* 2025-08-29 14:43:55.743369 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.743378 | orchestrator | 2025-08-29 14:43:55.743388 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:43:55.743397 | orchestrator | Friday 29 August 2025 14:43:52 +0000 (0:00:00.120) 0:00:12.625 ********* 2025-08-29 14:43:55.743407 | orchestrator | changed: [testbed-node-3] => { 2025-08-29 14:43:55.743416 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:43:55.743426 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:43:55.743436 | orchestrator |  "sdb": { 2025-08-29 14:43:55.743445 | orchestrator |  "osd_lvm_uuid": "95143370-f7d7-5ec5-ad3d-8af7ad027df9" 2025-08-29 14:43:55.743455 | orchestrator |  }, 2025-08-29 14:43:55.743465 | orchestrator |  "sdc": { 2025-08-29 14:43:55.743475 | orchestrator |  "osd_lvm_uuid": "a5a082ef-4dec-5d63-a984-4d3e57643ca0" 2025-08-29 14:43:55.743484 | orchestrator |  } 2025-08-29 14:43:55.743494 | orchestrator |  }, 2025-08-29 14:43:55.743503 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:43:55.743513 | orchestrator |  { 2025-08-29 14:43:55.743522 | orchestrator |  "data": "osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9", 2025-08-29 14:43:55.743532 | orchestrator |  "data_vg": "ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9" 2025-08-29 14:43:55.743542 | orchestrator |  }, 2025-08-29 
14:43:55.743551 | orchestrator |  { 2025-08-29 14:43:55.743560 | orchestrator |  "data": "osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0", 2025-08-29 14:43:55.743570 | orchestrator |  "data_vg": "ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0" 2025-08-29 14:43:55.743580 | orchestrator |  } 2025-08-29 14:43:55.743589 | orchestrator |  ] 2025-08-29 14:43:55.743599 | orchestrator |  } 2025-08-29 14:43:55.743608 | orchestrator | } 2025-08-29 14:43:55.743618 | orchestrator | 2025-08-29 14:43:55.743633 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 14:43:55.743644 | orchestrator | Friday 29 August 2025 14:43:53 +0000 (0:00:00.193) 0:00:12.819 ********* 2025-08-29 14:43:55.743661 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:43:55.743671 | orchestrator | 2025-08-29 14:43:55.743680 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:43:55.743690 | orchestrator | 2025-08-29 14:43:55.743700 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:43:55.743709 | orchestrator | Friday 29 August 2025 14:43:55 +0000 (0:00:02.144) 0:00:14.963 ********* 2025-08-29 14:43:55.743719 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:43:55.743728 | orchestrator | 2025-08-29 14:43:55.743738 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:43:55.743747 | orchestrator | Friday 29 August 2025 14:43:55 +0000 (0:00:00.247) 0:00:15.211 ********* 2025-08-29 14:43:55.743757 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:55.743766 | orchestrator | 2025-08-29 14:43:55.743776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:55.743795 | orchestrator | Friday 29 August 2025 14:43:55 +0000 (0:00:00.235) 0:00:15.447 ********* 2025-08-29 14:44:02.070287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:44:02.070382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:44:02.070398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:44:02.070410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:44:02.070421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:44:02.070432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:44:02.070443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:44:02.070454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:44:02.070465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:44:02.070476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:44:02.070487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:44:02.070498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:44:02.070509 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:44:02.070520 | orchestrator | 2025-08-29 14:44:02.070536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070547 | orchestrator | Friday 29 August 2025 14:43:56 +0000 (0:00:00.395) 0:00:15.842 ********* 2025-08-29 14:44:02.070559 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070571 | orchestrator | 2025-08-29 14:44:02.070583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070594 | orchestrator | Friday 29 August 2025 14:43:56 +0000 (0:00:00.195) 0:00:16.037 ********* 2025-08-29 14:44:02.070605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070616 | orchestrator | 2025-08-29 14:44:02.070627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070639 | orchestrator | Friday 29 August 2025 14:43:56 +0000 (0:00:00.185) 0:00:16.223 ********* 2025-08-29 14:44:02.070650 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070661 | orchestrator | 2025-08-29 14:44:02.070672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070683 | orchestrator | Friday 29 August 2025 14:43:56 +0000 (0:00:00.175) 0:00:16.398 ********* 2025-08-29 14:44:02.070694 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070705 | orchestrator | 2025-08-29 14:44:02.070737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070749 | orchestrator | Friday 29 August 2025 14:43:56 +0000 (0:00:00.159) 0:00:16.558 ********* 2025-08-29 14:44:02.070760 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070771 | orchestrator | 2025-08-29 14:44:02.070781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070792 | orchestrator | Friday 29 August 2025 14:43:57 +0000 (0:00:00.159) 0:00:16.718 ********* 2025-08-29 14:44:02.070803 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070814 | orchestrator | 2025-08-29 14:44:02.070825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070852 | orchestrator | Friday 29 August 2025 14:43:57 +0000 (0:00:00.420) 0:00:17.138 ********* 2025-08-29 14:44:02.070865 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070877 | orchestrator | 2025-08-29 14:44:02.070890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070902 | orchestrator | Friday 29 August 2025 14:43:57 +0000 (0:00:00.167) 0:00:17.305 ********* 2025-08-29 14:44:02.070915 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.070927 | orchestrator | 2025-08-29 14:44:02.070941 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.070954 | orchestrator | Friday 29 August 2025 14:43:57 +0000 (0:00:00.165) 0:00:17.471 ********* 2025-08-29 14:44:02.070967 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd) 2025-08-29 14:44:02.070978 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd) 2025-08-29 14:44:02.070989 | orchestrator | 2025-08-29 
14:44:02.071000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.071011 | orchestrator | Friday 29 August 2025 14:43:58 +0000 (0:00:00.326) 0:00:17.798 ********* 2025-08-29 14:44:02.071022 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8) 2025-08-29 14:44:02.071034 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8) 2025-08-29 14:44:02.071044 | orchestrator | 2025-08-29 14:44:02.071055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.071066 | orchestrator | Friday 29 August 2025 14:43:58 +0000 (0:00:00.328) 0:00:18.127 ********* 2025-08-29 14:44:02.071077 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048) 2025-08-29 14:44:02.071088 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048) 2025-08-29 14:44:02.071099 | orchestrator | 2025-08-29 14:44:02.071110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.071121 | orchestrator | Friday 29 August 2025 14:43:58 +0000 (0:00:00.308) 0:00:18.435 ********* 2025-08-29 14:44:02.071147 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008) 2025-08-29 14:44:02.071192 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008) 2025-08-29 14:44:02.071203 | orchestrator | 2025-08-29 14:44:02.071214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:02.071226 | orchestrator | Friday 29 August 2025 14:43:59 +0000 (0:00:00.325) 0:00:18.761 ********* 2025-08-29 14:44:02.071237 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:44:02.071248 | orchestrator | 2025-08-29 14:44:02.071259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071270 | orchestrator | Friday 29 August 2025 14:43:59 +0000 (0:00:00.246) 0:00:19.008 ********* 2025-08-29 14:44:02.071281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:44:02.071292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:44:02.071312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:44:02.071323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:44:02.071334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:44:02.071345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:44:02.071355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:44:02.071366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:44:02.071377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 14:44:02.071388 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:44:02.071398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:44:02.071409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:44:02.071420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:44:02.071431 | orchestrator | 2025-08-29 14:44:02.071442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071453 | orchestrator | Friday 29 August 2025 14:43:59 +0000 (0:00:00.295) 0:00:19.303 ********* 2025-08-29 14:44:02.071463 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071474 | orchestrator | 2025-08-29 14:44:02.071485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071496 | orchestrator | Friday 29 August 2025 14:43:59 +0000 (0:00:00.143) 0:00:19.446 ********* 2025-08-29 14:44:02.071507 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071517 | orchestrator | 2025-08-29 14:44:02.071534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071545 | orchestrator | Friday 29 August 2025 14:44:00 +0000 (0:00:00.529) 0:00:19.975 ********* 2025-08-29 14:44:02.071556 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071567 | orchestrator | 2025-08-29 14:44:02.071577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071588 | orchestrator | Friday 29 August 2025 14:44:00 +0000 (0:00:00.154) 0:00:20.130 ********* 2025-08-29 14:44:02.071599 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071610 | orchestrator | 2025-08-29 14:44:02.071621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071632 | orchestrator | Friday 29 August 2025 14:44:00 +0000 (0:00:00.256) 0:00:20.387 ********* 2025-08-29 14:44:02.071643 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071654 | orchestrator | 2025-08-29 14:44:02.071665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071676 | orchestrator | Friday 29 August 2025 14:44:00 +0000 (0:00:00.202) 0:00:20.589 ********* 2025-08-29 14:44:02.071687 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071697 | orchestrator | 2025-08-29 14:44:02.071708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071719 | orchestrator | Friday 29 August 2025 14:44:01 +0000 (0:00:00.169) 0:00:20.758 ********* 2025-08-29 14:44:02.071730 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071741 | orchestrator | 2025-08-29 14:44:02.071751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071762 | orchestrator | Friday 29 August 2025 14:44:01 +0000 (0:00:00.148) 0:00:20.907 ********* 2025-08-29 14:44:02.071773 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071784 | orchestrator | 2025-08-29 14:44:02.071795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071812 | orchestrator | Friday 29 August 2025 
14:44:01 +0000 (0:00:00.205) 0:00:21.112 ********* 2025-08-29 14:44:02.071823 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 14:44:02.071834 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 14:44:02.071845 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 14:44:02.071856 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 14:44:02.071867 | orchestrator | 2025-08-29 14:44:02.071877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:02.071888 | orchestrator | Friday 29 August 2025 14:44:01 +0000 (0:00:00.517) 0:00:21.629 ********* 2025-08-29 14:44:02.071899 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:02.071910 | orchestrator | 2025-08-29 14:44:02.071928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:07.490549 | orchestrator | Friday 29 August 2025 14:44:02 +0000 (0:00:00.151) 0:00:21.781 ********* 2025-08-29 14:44:07.490647 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.490662 | orchestrator | 2025-08-29 14:44:07.490675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:07.490686 | orchestrator | Friday 29 August 2025 14:44:02 +0000 (0:00:00.158) 0:00:21.939 ********* 2025-08-29 14:44:07.490697 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.490708 | orchestrator | 2025-08-29 14:44:07.490719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:07.490730 | orchestrator | Friday 29 August 2025 14:44:02 +0000 (0:00:00.156) 0:00:22.095 ********* 2025-08-29 14:44:07.490741 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.490752 | orchestrator | 2025-08-29 14:44:07.490763 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:44:07.490774 | orchestrator | Friday 29 August 2025 14:44:02 +0000 (0:00:00.144) 0:00:22.239 ********* 2025-08-29 14:44:07.490785 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:44:07.490796 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:44:07.490807 | orchestrator | 2025-08-29 14:44:07.490818 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:44:07.490829 | orchestrator | Friday 29 August 2025 14:44:02 +0000 (0:00:00.262) 0:00:22.502 ********* 2025-08-29 14:44:07.490840 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.490851 | orchestrator | 2025-08-29 14:44:07.490862 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:44:07.490873 | orchestrator | Friday 29 August 2025 14:44:02 +0000 (0:00:00.112) 0:00:22.614 ********* 2025-08-29 14:44:07.490884 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.490895 | orchestrator | 2025-08-29 14:44:07.490906 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:44:07.490917 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.134) 0:00:22.749 ********* 2025-08-29 14:44:07.490928 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.490939 | orchestrator | 2025-08-29 14:44:07.490949 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 
14:44:07.490960 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.130) 0:00:22.879 ********* 2025-08-29 14:44:07.490971 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:44:07.490982 | orchestrator | 2025-08-29 14:44:07.490993 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:44:07.491004 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.128) 0:00:23.007 ********* 2025-08-29 14:44:07.491015 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}}) 2025-08-29 14:44:07.491026 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}}) 2025-08-29 14:44:07.491037 | orchestrator | 2025-08-29 14:44:07.491048 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:44:07.491078 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.150) 0:00:23.158 ********* 2025-08-29 14:44:07.491090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}})  2025-08-29 14:44:07.491101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}})  2025-08-29 14:44:07.491114 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491126 | orchestrator | 2025-08-29 14:44:07.491182 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:44:07.491198 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.115) 0:00:23.273 ********* 2025-08-29 14:44:07.491210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}})  2025-08-29 14:44:07.491223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}})  2025-08-29 14:44:07.491235 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491247 | orchestrator | 2025-08-29 14:44:07.491260 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:44:07.491272 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.152) 0:00:23.426 ********* 2025-08-29 14:44:07.491285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}})  2025-08-29 14:44:07.491297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}})  2025-08-29 14:44:07.491310 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491323 | orchestrator | 2025-08-29 14:44:07.491336 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:44:07.491348 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.121) 0:00:23.547 ********* 2025-08-29 14:44:07.491360 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:44:07.491373 | orchestrator | 2025-08-29 14:44:07.491385 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:44:07.491397 | orchestrator | Friday 29 August 2025 14:44:03 +0000 (0:00:00.120) 0:00:23.667 ********* 2025-08-29 14:44:07.491409 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:44:07.491420 
| orchestrator | 2025-08-29 14:44:07.491433 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:44:07.491447 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.123) 0:00:23.790 ********* 2025-08-29 14:44:07.491459 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491470 | orchestrator | 2025-08-29 14:44:07.491497 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:44:07.491508 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.121) 0:00:23.912 ********* 2025-08-29 14:44:07.491519 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491530 | orchestrator | 2025-08-29 14:44:07.491541 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:44:07.491551 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.250) 0:00:24.163 ********* 2025-08-29 14:44:07.491562 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491573 | orchestrator | 2025-08-29 14:44:07.491583 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:44:07.491594 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.117) 0:00:24.281 ********* 2025-08-29 14:44:07.491605 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:44:07.491616 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:44:07.491627 | orchestrator |  "sdb": { 2025-08-29 14:44:07.491638 | orchestrator |  "osd_lvm_uuid": "2496fa80-0e44-5b7b-b63b-c9ee5061ab12" 2025-08-29 14:44:07.491649 | orchestrator |  }, 2025-08-29 14:44:07.491660 | orchestrator |  "sdc": { 2025-08-29 14:44:07.491670 | orchestrator |  "osd_lvm_uuid": "b3a0840c-f726-58e7-9fb9-c9f22cb6ab63" 2025-08-29 14:44:07.491690 | orchestrator |  } 2025-08-29 14:44:07.491701 | orchestrator |  } 2025-08-29 14:44:07.491712 | orchestrator | } 2025-08-29 14:44:07.491723 | orchestrator | 2025-08-29 14:44:07.491734 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:44:07.491745 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.143) 0:00:24.425 ********* 2025-08-29 14:44:07.491755 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491766 | orchestrator | 2025-08-29 14:44:07.491777 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 14:44:07.491787 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.145) 0:00:24.570 ********* 2025-08-29 14:44:07.491798 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491809 | orchestrator | 2025-08-29 14:44:07.491819 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:44:07.491830 | orchestrator | Friday 29 August 2025 14:44:04 +0000 (0:00:00.126) 0:00:24.696 ********* 2025-08-29 14:44:07.491841 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:44:07.491851 | orchestrator | 2025-08-29 14:44:07.491862 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:44:07.491873 | orchestrator | Friday 29 August 2025 14:44:05 +0000 (0:00:00.139) 0:00:24.836 ********* 2025-08-29 14:44:07.491884 | orchestrator | changed: [testbed-node-4] => { 2025-08-29 14:44:07.491894 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:44:07.491906 | orchestrator |  "ceph_osd_devices": { 2025-08-29 
14:44:07.491916 | orchestrator |  "sdb": { 2025-08-29 14:44:07.491927 | orchestrator |  "osd_lvm_uuid": "2496fa80-0e44-5b7b-b63b-c9ee5061ab12" 2025-08-29 14:44:07.491938 | orchestrator |  }, 2025-08-29 14:44:07.491949 | orchestrator |  "sdc": { 2025-08-29 14:44:07.491960 | orchestrator |  "osd_lvm_uuid": "b3a0840c-f726-58e7-9fb9-c9f22cb6ab63" 2025-08-29 14:44:07.491971 | orchestrator |  } 2025-08-29 14:44:07.491981 | orchestrator |  }, 2025-08-29 14:44:07.491992 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:44:07.492003 | orchestrator |  { 2025-08-29 14:44:07.492014 | orchestrator |  "data": "osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12", 2025-08-29 14:44:07.492025 | orchestrator |  "data_vg": "ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12" 2025-08-29 14:44:07.492036 | orchestrator |  }, 2025-08-29 14:44:07.492046 | orchestrator |  { 2025-08-29 14:44:07.492057 | orchestrator |  "data": "osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63", 2025-08-29 14:44:07.492068 | orchestrator |  "data_vg": "ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63" 2025-08-29 14:44:07.492078 | orchestrator |  } 2025-08-29 14:44:07.492089 | orchestrator |  ] 2025-08-29 14:44:07.492100 | orchestrator |  } 2025-08-29 14:44:07.492110 | orchestrator | } 2025-08-29 14:44:07.492121 | orchestrator | 2025-08-29 14:44:07.492132 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 14:44:07.492143 | orchestrator | Friday 29 August 2025 14:44:05 +0000 (0:00:00.187) 0:00:25.023 ********* 2025-08-29 14:44:07.492172 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:44:07.492185 | orchestrator | 2025-08-29 14:44:07.492196 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:44:07.492207 | orchestrator | 2025-08-29 14:44:07.492218 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:44:07.492228 | orchestrator | Friday 29 August 2025 14:44:06 +0000 (0:00:00.977) 0:00:26.000 ********* 2025-08-29 14:44:07.492239 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 14:44:07.492250 | orchestrator | 2025-08-29 14:44:07.492261 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:44:07.492271 | orchestrator | Friday 29 August 2025 14:44:06 +0000 (0:00:00.352) 0:00:26.353 ********* 2025-08-29 14:44:07.492282 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:44:07.492300 | orchestrator | 2025-08-29 14:44:07.492316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:07.492327 | orchestrator | Friday 29 August 2025 14:44:07 +0000 (0:00:00.461) 0:00:26.815 ********* 2025-08-29 14:44:07.492339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:44:07.492349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:44:07.492360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:44:07.492371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:44:07.492381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:44:07.492392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-08-29 14:44:07.492409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:44:14.771127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:44:14.771233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 14:44:14.771247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:44:14.771258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:44:14.771269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:44:14.771280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:44:14.771292 | orchestrator | 2025-08-29 14:44:14.771304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771315 | orchestrator | Friday 29 August 2025 14:44:07 +0000 (0:00:00.381) 0:00:27.197 ********* 2025-08-29 14:44:14.771326 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771338 | orchestrator | 2025-08-29 14:44:14.771349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771360 | orchestrator | Friday 29 August 2025 14:44:07 +0000 (0:00:00.183) 0:00:27.380 ********* 2025-08-29 14:44:14.771371 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771382 | orchestrator | 2025-08-29 14:44:14.771392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771403 | orchestrator | Friday 29 August 2025 14:44:07 +0000 (0:00:00.174) 0:00:27.554 ********* 2025-08-29 14:44:14.771414 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771425 | orchestrator | 2025-08-29 14:44:14.771435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771446 | orchestrator | Friday 29 August 2025 14:44:08 +0000 (0:00:00.188) 0:00:27.743 ********* 2025-08-29 14:44:14.771457 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771468 | orchestrator | 2025-08-29 14:44:14.771480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771490 | orchestrator | Friday 29 August 2025 14:44:08 +0000 (0:00:00.170) 0:00:27.913 ********* 2025-08-29 14:44:14.771501 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771512 | orchestrator | 2025-08-29 14:44:14.771523 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771534 | orchestrator | Friday 29 August 2025 14:44:08 +0000 (0:00:00.183) 0:00:28.097 ********* 2025-08-29 14:44:14.771545 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771556 | orchestrator | 2025-08-29 14:44:14.771566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771578 | orchestrator | Friday 29 August 2025 14:44:08 +0000 (0:00:00.174) 0:00:28.272 ********* 2025-08-29 14:44:14.771588 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771599 | orchestrator | 2025-08-29 14:44:14.771629 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-08-29 14:44:14.771640 | orchestrator | Friday 29 August 2025 14:44:08 +0000 (0:00:00.173) 0:00:28.445 ********* 2025-08-29 14:44:14.771651 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.771662 | orchestrator | 2025-08-29 14:44:14.771673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771684 | orchestrator | Friday 29 August 2025 14:44:08 +0000 (0:00:00.173) 0:00:28.618 ********* 2025-08-29 14:44:14.771697 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0) 2025-08-29 14:44:14.771710 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0) 2025-08-29 14:44:14.771722 | orchestrator | 2025-08-29 14:44:14.771735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771747 | orchestrator | Friday 29 August 2025 14:44:09 +0000 (0:00:00.544) 0:00:29.163 ********* 2025-08-29 14:44:14.771759 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166) 2025-08-29 14:44:14.771771 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166) 2025-08-29 14:44:14.771783 | orchestrator | 2025-08-29 14:44:14.771795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771807 | orchestrator | Friday 29 August 2025 14:44:10 +0000 (0:00:00.912) 0:00:30.076 ********* 2025-08-29 14:44:14.771820 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf) 2025-08-29 14:44:14.771832 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf) 2025-08-29 14:44:14.771844 | orchestrator | 2025-08-29 14:44:14.771856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771868 | orchestrator | Friday 29 August 2025 14:44:10 +0000 (0:00:00.452) 0:00:30.528 ********* 2025-08-29 14:44:14.771880 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a) 2025-08-29 14:44:14.771892 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a) 2025-08-29 14:44:14.771904 | orchestrator | 2025-08-29 14:44:14.771916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:44:14.771927 | orchestrator | Friday 29 August 2025 14:44:11 +0000 (0:00:00.552) 0:00:31.081 ********* 2025-08-29 14:44:14.771940 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:44:14.771951 | orchestrator | 2025-08-29 14:44:14.771963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.771975 | orchestrator | Friday 29 August 2025 14:44:11 +0000 (0:00:00.436) 0:00:31.517 ********* 2025-08-29 14:44:14.772002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:44:14.772016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:44:14.772028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:44:14.772040 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:44:14.772051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:44:14.772062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:44:14.772087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:44:14.772098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:44:14.772109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 14:44:14.772127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:44:14.772138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:44:14.772149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:44:14.772177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:44:14.772189 | orchestrator | 2025-08-29 14:44:14.772200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772211 | orchestrator | Friday 29 August 2025 14:44:12 +0000 (0:00:00.429) 0:00:31.946 ********* 2025-08-29 14:44:14.772222 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772233 | orchestrator | 2025-08-29 14:44:14.772243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772254 | orchestrator | Friday 29 August 2025 14:44:12 +0000 (0:00:00.141) 0:00:32.088 ********* 2025-08-29 14:44:14.772265 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772276 | orchestrator | 2025-08-29 14:44:14.772287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772298 | orchestrator | Friday 29 August 2025 14:44:12 +0000 (0:00:00.140) 0:00:32.229 ********* 2025-08-29 14:44:14.772308 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772319 | orchestrator | 2025-08-29 14:44:14.772330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772346 | orchestrator | Friday 29 August 2025 14:44:12 +0000 (0:00:00.141) 0:00:32.370 ********* 2025-08-29 14:44:14.772357 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772368 | orchestrator | 2025-08-29 14:44:14.772378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772389 | orchestrator | Friday 29 August 2025 14:44:12 +0000 (0:00:00.141) 0:00:32.511 ********* 2025-08-29 14:44:14.772400 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772411 | orchestrator | 2025-08-29 14:44:14.772421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772432 | orchestrator | Friday 29 August 2025 14:44:12 +0000 (0:00:00.145) 0:00:32.657 ********* 2025-08-29 14:44:14.772443 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772454 | orchestrator | 2025-08-29 14:44:14.772464 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-08-29 14:44:14.772475 | orchestrator | Friday 29 August 2025 14:44:13 +0000 (0:00:00.415) 0:00:33.072 ********* 2025-08-29 14:44:14.772486 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772497 | orchestrator | 2025-08-29 14:44:14.772507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772518 | orchestrator | Friday 29 August 2025 14:44:13 +0000 (0:00:00.186) 0:00:33.259 ********* 2025-08-29 14:44:14.772529 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772540 | orchestrator | 2025-08-29 14:44:14.772551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772561 | orchestrator | Friday 29 August 2025 14:44:13 +0000 (0:00:00.133) 0:00:33.392 ********* 2025-08-29 14:44:14.772572 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 14:44:14.772583 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 14:44:14.772594 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 14:44:14.772605 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 14:44:14.772616 | orchestrator | 2025-08-29 14:44:14.772627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772637 | orchestrator | Friday 29 August 2025 14:44:14 +0000 (0:00:00.517) 0:00:33.910 ********* 2025-08-29 14:44:14.772648 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772659 | orchestrator | 2025-08-29 14:44:14.772670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772681 | orchestrator | Friday 29 August 2025 14:44:14 +0000 (0:00:00.142) 0:00:34.052 ********* 2025-08-29 14:44:14.772697 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772708 | orchestrator | 2025-08-29 14:44:14.772719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772730 | orchestrator | Friday 29 August 2025 14:44:14 +0000 (0:00:00.145) 0:00:34.198 ********* 2025-08-29 14:44:14.772741 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772751 | orchestrator | 2025-08-29 14:44:14.772762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:44:14.772773 | orchestrator | Friday 29 August 2025 14:44:14 +0000 (0:00:00.138) 0:00:34.337 ********* 2025-08-29 14:44:14.772784 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:14.772795 | orchestrator | 2025-08-29 14:44:14.772806 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:44:14.772822 | orchestrator | Friday 29 August 2025 14:44:14 +0000 (0:00:00.146) 0:00:34.484 ********* 2025-08-29 14:44:18.173230 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:44:18.173904 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:44:18.173935 | orchestrator | 2025-08-29 14:44:18.173949 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:44:18.173962 | orchestrator | Friday 29 August 2025 14:44:14 +0000 (0:00:00.141) 0:00:34.625 ********* 2025-08-29 14:44:18.173974 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.173987 | orchestrator | 2025-08-29 14:44:18.173999 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-08-29 14:44:18.174012 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.133) 0:00:34.758 ********* 2025-08-29 14:44:18.174077 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174090 | orchestrator | 2025-08-29 14:44:18.174101 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:44:18.174112 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.092) 0:00:34.851 ********* 2025-08-29 14:44:18.174122 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174133 | orchestrator | 2025-08-29 14:44:18.174144 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:44:18.174155 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.093) 0:00:34.944 ********* 2025-08-29 14:44:18.174191 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:44:18.174203 | orchestrator | 2025-08-29 14:44:18.174213 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:44:18.174224 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.220) 0:00:35.164 ********* 2025-08-29 14:44:18.174235 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf1413fe-a30b-500c-b995-d4125007de3c'}}) 2025-08-29 14:44:18.174247 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}}) 2025-08-29 14:44:18.174258 | orchestrator | 2025-08-29 14:44:18.174269 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:44:18.174280 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.141) 0:00:35.306 ********* 2025-08-29 14:44:18.174291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf1413fe-a30b-500c-b995-d4125007de3c'}})  2025-08-29 14:44:18.174303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}})  2025-08-29 14:44:18.174314 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174325 | orchestrator | 2025-08-29 14:44:18.174336 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:44:18.174347 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.138) 0:00:35.444 ********* 2025-08-29 14:44:18.174357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf1413fe-a30b-500c-b995-d4125007de3c'}})  2025-08-29 14:44:18.174368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}})  2025-08-29 14:44:18.174398 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174409 | orchestrator | 2025-08-29 14:44:18.174420 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:44:18.174431 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.122) 0:00:35.566 ********* 2025-08-29 14:44:18.174442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf1413fe-a30b-500c-b995-d4125007de3c'}})  2025-08-29 14:44:18.174467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}})  2025-08-29 
14:44:18.174479 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174490 | orchestrator | 2025-08-29 14:44:18.174500 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:44:18.174511 | orchestrator | Friday 29 August 2025 14:44:15 +0000 (0:00:00.119) 0:00:35.685 ********* 2025-08-29 14:44:18.174522 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:44:18.174532 | orchestrator | 2025-08-29 14:44:18.174543 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:44:18.174553 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.119) 0:00:35.804 ********* 2025-08-29 14:44:18.174564 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:44:18.174575 | orchestrator | 2025-08-29 14:44:18.174585 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:44:18.174596 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.126) 0:00:35.931 ********* 2025-08-29 14:44:18.174606 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174617 | orchestrator | 2025-08-29 14:44:18.174627 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:44:18.174638 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.122) 0:00:36.053 ********* 2025-08-29 14:44:18.174649 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174660 | orchestrator | 2025-08-29 14:44:18.174670 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:44:18.174681 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.116) 0:00:36.170 ********* 2025-08-29 14:44:18.174692 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174702 | orchestrator | 2025-08-29 14:44:18.174713 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:44:18.174723 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.110) 0:00:36.280 ********* 2025-08-29 14:44:18.174734 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:44:18.174745 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:44:18.174756 | orchestrator |  "sdb": { 2025-08-29 14:44:18.174767 | orchestrator |  "osd_lvm_uuid": "bf1413fe-a30b-500c-b995-d4125007de3c" 2025-08-29 14:44:18.174797 | orchestrator |  }, 2025-08-29 14:44:18.174808 | orchestrator |  "sdc": { 2025-08-29 14:44:18.174819 | orchestrator |  "osd_lvm_uuid": "e997a020-3476-50fd-bfa0-07ccf1b1c8ec" 2025-08-29 14:44:18.174830 | orchestrator |  } 2025-08-29 14:44:18.174841 | orchestrator |  } 2025-08-29 14:44:18.174852 | orchestrator | } 2025-08-29 14:44:18.174863 | orchestrator | 2025-08-29 14:44:18.174922 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:44:18.174935 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.139) 0:00:36.420 ********* 2025-08-29 14:44:18.174946 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.174957 | orchestrator | 2025-08-29 14:44:18.174968 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 14:44:18.174979 | orchestrator | Friday 29 August 2025 14:44:16 +0000 (0:00:00.105) 0:00:36.525 ********* 2025-08-29 14:44:18.174990 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.175000 | orchestrator | 2025-08-29 14:44:18.175012 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:44:18.175031 | orchestrator | Friday 29 August 2025 14:44:17 +0000 (0:00:00.227) 0:00:36.753 ********* 2025-08-29 14:44:18.175042 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:44:18.175083 | orchestrator | 2025-08-29 14:44:18.175096 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:44:18.175106 | orchestrator | Friday 29 August 2025 14:44:17 +0000 (0:00:00.116) 0:00:36.869 ********* 2025-08-29 14:44:18.175117 | orchestrator | changed: [testbed-node-5] => { 2025-08-29 14:44:18.175128 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:44:18.175139 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:44:18.175150 | orchestrator |  "sdb": { 2025-08-29 14:44:18.175177 | orchestrator |  "osd_lvm_uuid": "bf1413fe-a30b-500c-b995-d4125007de3c" 2025-08-29 14:44:18.175188 | orchestrator |  }, 2025-08-29 14:44:18.175199 | orchestrator |  "sdc": { 2025-08-29 14:44:18.175210 | orchestrator |  "osd_lvm_uuid": "e997a020-3476-50fd-bfa0-07ccf1b1c8ec" 2025-08-29 14:44:18.175221 | orchestrator |  } 2025-08-29 14:44:18.175231 | orchestrator |  }, 2025-08-29 14:44:18.175242 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:44:18.175253 | orchestrator |  { 2025-08-29 14:44:18.175264 | orchestrator |  "data": "osd-block-bf1413fe-a30b-500c-b995-d4125007de3c", 2025-08-29 14:44:18.175275 | orchestrator |  "data_vg": "ceph-bf1413fe-a30b-500c-b995-d4125007de3c" 2025-08-29 14:44:18.175286 | orchestrator |  }, 2025-08-29 14:44:18.175296 | orchestrator |  { 2025-08-29 14:44:18.175307 | orchestrator |  "data": "osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec", 2025-08-29 14:44:18.175318 | orchestrator |  "data_vg": "ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec" 2025-08-29 14:44:18.175329 | orchestrator |  } 2025-08-29 14:44:18.175340 | orchestrator |  ] 2025-08-29 14:44:18.175350 | orchestrator |  } 2025-08-29 14:44:18.175361 | orchestrator | } 2025-08-29 14:44:18.175376 | orchestrator | 2025-08-29 14:44:18.175387 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 14:44:18.175398 | orchestrator | Friday 29 August 2025 14:44:17 +0000 (0:00:00.207) 0:00:37.076 ********* 2025-08-29 14:44:18.175409 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 14:44:18.175420 | orchestrator | 2025-08-29 14:44:18.175431 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:44:18.175442 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:44:18.175453 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:44:18.175465 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:44:18.175475 | orchestrator | 2025-08-29 14:44:18.175486 | orchestrator | 2025-08-29 14:44:18.175497 | orchestrator | 2025-08-29 14:44:18.175508 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:44:18.175519 | orchestrator | Friday 29 August 2025 14:44:18 +0000 (0:00:00.797) 0:00:37.874 ********* 2025-08-29 14:44:18.175530 | orchestrator | =============================================================================== 2025-08-29 14:44:18.175540 | orchestrator | Write configuration file 
------------------------------------------------ 3.92s 2025-08-29 14:44:18.175551 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2025-08-29 14:44:18.175562 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2025-08-29 14:44:18.175572 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-08-29 14:44:18.175583 | orchestrator | Get initial list of available block devices ----------------------------- 0.94s 2025-08-29 14:44:18.175594 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2025-08-29 14:44:18.175612 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s 2025-08-29 14:44:18.175623 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-08-29 14:44:18.175634 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-08-29 14:44:18.175645 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-08-29 14:44:18.175655 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.60s 2025-08-29 14:44:18.175666 | orchestrator | Print configuration data ------------------------------------------------ 0.59s 2025-08-29 14:44:18.175677 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s 2025-08-29 14:44:18.175688 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2025-08-29 14:44:18.175707 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2025-08-29 14:44:18.380847 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s 2025-08-29 14:44:18.380930 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s 2025-08-29 14:44:18.380943 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s 2025-08-29 14:44:18.380955 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2025-08-29 14:44:18.380966 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.49s 2025-08-29 14:44:41.173897 | orchestrator | 2025-08-29 14:44:41 | INFO  | Task 43329733-0d10-43cd-9c31-a47e0c31232e (sync inventory) is running in background. Output coming soon. 
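The configuration data printed for testbed-node-5 above maps each OSD disk to a deterministic osd_lvm_uuid and derives from it the lvm_volumes list that is later consumed by the Ceph deployment (one "osd-block-<uuid>" LV inside a "ceph-<uuid>" VG per disk). A minimal sketch of the equivalent YAML, reusing the UUIDs from the log; the file name and its placement in the configuration repository are assumptions:

  # host_vars sketch for testbed-node-5 (file name and placement assumed)
  ceph_osd_devices:
    sdb:
      osd_lvm_uuid: bf1413fe-a30b-500c-b995-d4125007de3c
    sdc:
      osd_lvm_uuid: e997a020-3476-50fd-bfa0-07ccf1b1c8ec

  # derived by the play and persisted via the "Write configuration file" handler
  lvm_volumes:
    - data: osd-block-bf1413fe-a30b-500c-b995-d4125007de3c
      data_vg: ceph-bf1413fe-a30b-500c-b995-d4125007de3c
    - data: osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec
      data_vg: ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec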
2025-08-29 14:44:59.513240 | orchestrator | 2025-08-29 14:44:42 | INFO  | Starting group_vars file reorganization 2025-08-29 14:44:59.513334 | orchestrator | 2025-08-29 14:44:42 | INFO  | Moved 0 file(s) to their respective directories 2025-08-29 14:44:59.513350 | orchestrator | 2025-08-29 14:44:42 | INFO  | Group_vars file reorganization completed 2025-08-29 14:44:59.513362 | orchestrator | 2025-08-29 14:44:44 | INFO  | Starting variable preparation from inventory 2025-08-29 14:44:59.513374 | orchestrator | 2025-08-29 14:44:45 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-08-29 14:44:59.513385 | orchestrator | 2025-08-29 14:44:45 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-08-29 14:44:59.513396 | orchestrator | 2025-08-29 14:44:45 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-08-29 14:44:59.513424 | orchestrator | 2025-08-29 14:44:45 | INFO  | 3 file(s) written, 6 host(s) processed 2025-08-29 14:44:59.513436 | orchestrator | 2025-08-29 14:44:45 | INFO  | Variable preparation completed 2025-08-29 14:44:59.513447 | orchestrator | 2025-08-29 14:44:46 | INFO  | Starting inventory overwrite handling 2025-08-29 14:44:59.513458 | orchestrator | 2025-08-29 14:44:46 | INFO  | Handling group overwrites in 99-overwrite 2025-08-29 14:44:59.513470 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removing group frr:children from 60-generic 2025-08-29 14:44:59.513486 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removing group storage:children from 50-kolla 2025-08-29 14:44:59.513497 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removing group netbird:children from 50-infrastruture 2025-08-29 14:44:59.513508 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removing group ceph-mds from 50-ceph 2025-08-29 14:44:59.513519 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removing group ceph-rgw from 50-ceph 2025-08-29 14:44:59.513530 | orchestrator | 2025-08-29 14:44:46 | INFO  | Handling group overwrites in 20-roles 2025-08-29 14:44:59.513541 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removing group k3s_node from 50-infrastruture 2025-08-29 14:44:59.513570 | orchestrator | 2025-08-29 14:44:46 | INFO  | Removed 6 group(s) in total 2025-08-29 14:44:59.513582 | orchestrator | 2025-08-29 14:44:46 | INFO  | Inventory overwrite handling completed 2025-08-29 14:44:59.513593 | orchestrator | 2025-08-29 14:44:48 | INFO  | Starting merge of inventory files 2025-08-29 14:44:59.513603 | orchestrator | 2025-08-29 14:44:48 | INFO  | Inventory files merged successfully 2025-08-29 14:44:59.513614 | orchestrator | 2025-08-29 14:44:52 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-08-29 14:44:59.513625 | orchestrator | 2025-08-29 14:44:58 | INFO  | Successfully wrote ClusterShell configuration 2025-08-29 14:44:59.513636 | orchestrator | [master 98279f1] 2025-08-29-14-44 2025-08-29 14:44:59.513648 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-08-29 14:45:01.344685 | orchestrator | 2025-08-29 14:45:01 | INFO  | Task 731c3fb6-adc1-462b-b6ca-2bcc452271f4 (ceph-create-lvm-devices) was prepared for execution. 2025-08-29 14:45:01.344766 | orchestrator | 2025-08-29 14:45:01 | INFO  | It takes a moment until task 731c3fb6-adc1-462b-b6ca-2bcc452271f4 (ceph-create-lvm-devices) has been started and output is visible here. 
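The sync-inventory task above regenerates a handful of group_vars files from the Ansible inventory before the next Ceph play starts. A rough sketch of what the three files written in this run might contain; the host names and the fsid value are placeholders for illustration only, not data from this build:

  # 050-kolla-ceph-rgw-hosts.yml (sketch, hosts assumed)
  ceph_rgw_hosts:
    - testbed-node-0
    - testbed-node-1
    - testbed-node-2

  # 050-infrastructure-cephclient-mons.yml (sketch, hosts assumed)
  cephclient_mons:
    - testbed-node-0
    - testbed-node-1
    - testbed-node-2

  # 050-ceph-cluster-fsid.yml (sketch, placeholder value)
  ceph_cluster_fsid: "00000000-0000-0000-0000-000000000000"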
2025-08-29 14:45:12.221518 | orchestrator | 2025-08-29 14:45:12.221614 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:45:12.221630 | orchestrator | 2025-08-29 14:45:12.221643 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:45:12.221654 | orchestrator | Friday 29 August 2025 14:45:05 +0000 (0:00:00.275) 0:00:00.275 ********* 2025-08-29 14:45:12.221665 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:45:12.221677 | orchestrator | 2025-08-29 14:45:12.221688 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:45:12.221699 | orchestrator | Friday 29 August 2025 14:45:05 +0000 (0:00:00.233) 0:00:00.509 ********* 2025-08-29 14:45:12.221711 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:12.221722 | orchestrator | 2025-08-29 14:45:12.221734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.221745 | orchestrator | Friday 29 August 2025 14:45:05 +0000 (0:00:00.212) 0:00:00.722 ********* 2025-08-29 14:45:12.221756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:45:12.221767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:45:12.221779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:45:12.221790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:45:12.221801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:45:12.221812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:45:12.221823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:45:12.221834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:45:12.221845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 14:45:12.221856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:45:12.221868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:45:12.221879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:45:12.221890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:45:12.221900 | orchestrator | 2025-08-29 14:45:12.221911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.221946 | orchestrator | Friday 29 August 2025 14:45:05 +0000 (0:00:00.380) 0:00:01.103 ********* 2025-08-29 14:45:12.221958 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.221969 | orchestrator | 2025-08-29 14:45:12.221980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.221990 | orchestrator | Friday 29 August 2025 14:45:06 +0000 (0:00:00.356) 0:00:01.459 ********* 2025-08-29 14:45:12.222001 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:45:12.222012 | orchestrator | 2025-08-29 14:45:12.222072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222083 | orchestrator | Friday 29 August 2025 14:45:06 +0000 (0:00:00.178) 0:00:01.637 ********* 2025-08-29 14:45:12.222094 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222105 | orchestrator | 2025-08-29 14:45:12.222116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222126 | orchestrator | Friday 29 August 2025 14:45:06 +0000 (0:00:00.218) 0:00:01.856 ********* 2025-08-29 14:45:12.222137 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222148 | orchestrator | 2025-08-29 14:45:12.222159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222170 | orchestrator | Friday 29 August 2025 14:45:06 +0000 (0:00:00.207) 0:00:02.064 ********* 2025-08-29 14:45:12.222180 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222191 | orchestrator | 2025-08-29 14:45:12.222221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222232 | orchestrator | Friday 29 August 2025 14:45:07 +0000 (0:00:00.221) 0:00:02.286 ********* 2025-08-29 14:45:12.222243 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222253 | orchestrator | 2025-08-29 14:45:12.222264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222275 | orchestrator | Friday 29 August 2025 14:45:07 +0000 (0:00:00.200) 0:00:02.486 ********* 2025-08-29 14:45:12.222285 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222296 | orchestrator | 2025-08-29 14:45:12.222307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222317 | orchestrator | Friday 29 August 2025 14:45:07 +0000 (0:00:00.177) 0:00:02.664 ********* 2025-08-29 14:45:12.222328 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222339 | orchestrator | 2025-08-29 14:45:12.222349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222360 | orchestrator | Friday 29 August 2025 14:45:07 +0000 (0:00:00.184) 0:00:02.848 ********* 2025-08-29 14:45:12.222371 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f) 2025-08-29 14:45:12.222383 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f) 2025-08-29 14:45:12.222394 | orchestrator | 2025-08-29 14:45:12.222405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222415 | orchestrator | Friday 29 August 2025 14:45:08 +0000 (0:00:00.395) 0:00:03.243 ********* 2025-08-29 14:45:12.222444 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d) 2025-08-29 14:45:12.222456 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d) 2025-08-29 14:45:12.222466 | orchestrator | 2025-08-29 14:45:12.222477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222488 | orchestrator | Friday 29 August 2025 14:45:08 +0000 (0:00:00.383) 0:00:03.627 ********* 2025-08-29 
14:45:12.222499 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e) 2025-08-29 14:45:12.222510 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e) 2025-08-29 14:45:12.222521 | orchestrator | 2025-08-29 14:45:12.222531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222551 | orchestrator | Friday 29 August 2025 14:45:09 +0000 (0:00:00.554) 0:00:04.181 ********* 2025-08-29 14:45:12.222562 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba) 2025-08-29 14:45:12.222573 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba) 2025-08-29 14:45:12.222584 | orchestrator | 2025-08-29 14:45:12.222594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:12.222605 | orchestrator | Friday 29 August 2025 14:45:09 +0000 (0:00:00.572) 0:00:04.753 ********* 2025-08-29 14:45:12.222616 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:45:12.222626 | orchestrator | 2025-08-29 14:45:12.222637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.222648 | orchestrator | Friday 29 August 2025 14:45:10 +0000 (0:00:00.655) 0:00:05.409 ********* 2025-08-29 14:45:12.222658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:45:12.222669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:45:12.222680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:45:12.222690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:45:12.222715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:45:12.222727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:45:12.222737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:45:12.222748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:45:12.222759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:45:12.222770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:45:12.222780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:45:12.222791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:45:12.222806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:45:12.222817 | orchestrator | 2025-08-29 14:45:12.222828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.222838 | orchestrator | Friday 29 August 2025 14:45:10 +0000 (0:00:00.405) 0:00:05.814 ********* 2025-08-29 14:45:12.222849 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 14:45:12.222860 | orchestrator | 2025-08-29 14:45:12.222871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.222881 | orchestrator | Friday 29 August 2025 14:45:10 +0000 (0:00:00.178) 0:00:05.993 ********* 2025-08-29 14:45:12.222892 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222903 | orchestrator | 2025-08-29 14:45:12.222913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.222924 | orchestrator | Friday 29 August 2025 14:45:11 +0000 (0:00:00.197) 0:00:06.191 ********* 2025-08-29 14:45:12.222935 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222945 | orchestrator | 2025-08-29 14:45:12.222956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.222967 | orchestrator | Friday 29 August 2025 14:45:11 +0000 (0:00:00.217) 0:00:06.408 ********* 2025-08-29 14:45:12.222977 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.222988 | orchestrator | 2025-08-29 14:45:12.222999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.223009 | orchestrator | Friday 29 August 2025 14:45:11 +0000 (0:00:00.176) 0:00:06.585 ********* 2025-08-29 14:45:12.223027 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.223037 | orchestrator | 2025-08-29 14:45:12.223048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.223059 | orchestrator | Friday 29 August 2025 14:45:11 +0000 (0:00:00.180) 0:00:06.765 ********* 2025-08-29 14:45:12.223069 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.223080 | orchestrator | 2025-08-29 14:45:12.223091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.223101 | orchestrator | Friday 29 August 2025 14:45:11 +0000 (0:00:00.200) 0:00:06.966 ********* 2025-08-29 14:45:12.223112 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:12.223123 | orchestrator | 2025-08-29 14:45:12.223133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:12.223144 | orchestrator | Friday 29 August 2025 14:45:12 +0000 (0:00:00.196) 0:00:07.162 ********* 2025-08-29 14:45:12.223161 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.268526 | orchestrator | 2025-08-29 14:45:20.268673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:20.268690 | orchestrator | Friday 29 August 2025 14:45:12 +0000 (0:00:00.192) 0:00:07.355 ********* 2025-08-29 14:45:20.268702 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:45:20.268715 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:45:20.268727 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:45:20.268738 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:45:20.268749 | orchestrator | 2025-08-29 14:45:20.268761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:20.268772 | orchestrator | Friday 29 August 2025 14:45:13 +0000 (0:00:00.889) 0:00:08.245 ********* 2025-08-29 14:45:20.268783 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.268795 | orchestrator | 2025-08-29 14:45:20.268806 | orchestrator | 
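The long runs of "Add known links" and "Add known partitions" tasks walk every device reported by the Ansible facts (loop0..loop7, sda..sdd, sr0) and extend the list of available block devices with their /dev/disk/by-id aliases and partitions; that is why the QEMU_HARDDISK IDs and sda1/sda14/sda15/sda16 show up as loop items. A minimal sketch of one such per-device pass, assuming ansible_facts.devices as the data source; the fact name and the contents of the included task files are assumptions, not the actual implementation:

  # rough per-device pass comparable to _add-device-links.yml / _add-device-partitions.yml
  - name: Add known links to the list of available block devices
    ansible.builtin.set_fact:
      _available_devices: "{{ _available_devices | default([]) + ansible_facts.devices[item].links.ids }}"
    when: ansible_facts.devices[item].links.ids | length > 0

  - name: Add known partitions to the list of available block devices
    ansible.builtin.set_fact:
      _available_devices: "{{ _available_devices | default([]) + (ansible_facts.devices[item].partitions.keys() | list) }}"
    when: ansible_facts.devices[item].partitions | length > 0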
TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:20.268816 | orchestrator | Friday 29 August 2025 14:45:13 +0000 (0:00:00.182) 0:00:08.427 ********* 2025-08-29 14:45:20.268827 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.268838 | orchestrator | 2025-08-29 14:45:20.268849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:20.268860 | orchestrator | Friday 29 August 2025 14:45:13 +0000 (0:00:00.197) 0:00:08.624 ********* 2025-08-29 14:45:20.268871 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.268882 | orchestrator | 2025-08-29 14:45:20.268893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:20.268905 | orchestrator | Friday 29 August 2025 14:45:13 +0000 (0:00:00.196) 0:00:08.821 ********* 2025-08-29 14:45:20.268916 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.268927 | orchestrator | 2025-08-29 14:45:20.268938 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:45:20.268948 | orchestrator | Friday 29 August 2025 14:45:13 +0000 (0:00:00.206) 0:00:09.028 ********* 2025-08-29 14:45:20.268959 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.268970 | orchestrator | 2025-08-29 14:45:20.268981 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:45:20.268992 | orchestrator | Friday 29 August 2025 14:45:14 +0000 (0:00:00.136) 0:00:09.164 ********* 2025-08-29 14:45:20.269004 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95143370-f7d7-5ec5-ad3d-8af7ad027df9'}}) 2025-08-29 14:45:20.269015 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a5a082ef-4dec-5d63-a984-4d3e57643ca0'}}) 2025-08-29 14:45:20.269026 | orchestrator | 2025-08-29 14:45:20.269037 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:45:20.269048 | orchestrator | Friday 29 August 2025 14:45:14 +0000 (0:00:00.199) 0:00:09.364 ********* 2025-08-29 14:45:20.269060 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'}) 2025-08-29 14:45:20.269101 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'}) 2025-08-29 14:45:20.269112 | orchestrator | 2025-08-29 14:45:20.269123 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:45:20.269134 | orchestrator | Friday 29 August 2025 14:45:16 +0000 (0:00:01.992) 0:00:11.357 ********* 2025-08-29 14:45:20.269146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269169 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269180 | orchestrator | 2025-08-29 14:45:20.269191 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 
14:45:20.269224 | orchestrator | Friday 29 August 2025 14:45:16 +0000 (0:00:00.167) 0:00:11.525 ********* 2025-08-29 14:45:20.269235 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'}) 2025-08-29 14:45:20.269246 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'}) 2025-08-29 14:45:20.269257 | orchestrator | 2025-08-29 14:45:20.269268 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:45:20.269279 | orchestrator | Friday 29 August 2025 14:45:17 +0000 (0:00:01.571) 0:00:13.096 ********* 2025-08-29 14:45:20.269290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269313 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269324 | orchestrator | 2025-08-29 14:45:20.269335 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:45:20.269346 | orchestrator | Friday 29 August 2025 14:45:18 +0000 (0:00:00.173) 0:00:13.270 ********* 2025-08-29 14:45:20.269357 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269368 | orchestrator | 2025-08-29 14:45:20.269379 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:45:20.269410 | orchestrator | Friday 29 August 2025 14:45:18 +0000 (0:00:00.142) 0:00:13.413 ********* 2025-08-29 14:45:20.269421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269433 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269444 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269455 | orchestrator | 2025-08-29 14:45:20.269465 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:45:20.269476 | orchestrator | Friday 29 August 2025 14:45:18 +0000 (0:00:00.460) 0:00:13.873 ********* 2025-08-29 14:45:20.269487 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269498 | orchestrator | 2025-08-29 14:45:20.269508 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:45:20.269519 | orchestrator | Friday 29 August 2025 14:45:18 +0000 (0:00:00.146) 0:00:14.020 ********* 2025-08-29 14:45:20.269530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269561 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269571 | orchestrator | 2025-08-29 14:45:20.269582 | orchestrator | 
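For each entry in ceph_osd_devices the play creates one volume group on the data disk and one osd-block-<uuid> logical volume spanning it, which is what the two "changed" results above report; the LVM report at the end of this play shows /dev/sdb and /dev/sdc each backing one such VG/LV pair. A minimal sketch of these two steps with the community.general LVM modules, assuming the VG-to-PV mapping built in "Create dict of block VGs -> PVs" is available as a dict named _block_vg_pvs (that name is an assumption):

  - name: Create block VGs  # sketch, the real play may use different loops and variables
    community.general.lvg:
      vg: "{{ item.data_vg }}"
      pvs: "{{ _block_vg_pvs[item.data_vg] }}"  # e.g. /dev/sdb
    loop: "{{ lvm_volumes }}"

  - name: Create block LVs
    community.general.lvol:
      vg: "{{ item.data_vg }}"
      lv: "{{ item.data }}"
      size: 100%VG  # assumption: the block LV takes the whole VG
    loop: "{{ lvm_volumes }}"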
TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:45:20.269593 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.164) 0:00:14.184 ********* 2025-08-29 14:45:20.269604 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269615 | orchestrator | 2025-08-29 14:45:20.269625 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:45:20.269636 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.139) 0:00:14.323 ********* 2025-08-29 14:45:20.269647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269658 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269669 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269680 | orchestrator | 2025-08-29 14:45:20.269691 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:45:20.269702 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.159) 0:00:14.483 ********* 2025-08-29 14:45:20.269713 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:20.269724 | orchestrator | 2025-08-29 14:45:20.269735 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:45:20.269746 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.150) 0:00:14.633 ********* 2025-08-29 14:45:20.269778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269806 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269817 | orchestrator | 2025-08-29 14:45:20.269828 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:45:20.269839 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.156) 0:00:14.790 ********* 2025-08-29 14:45:20.269850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:20.269872 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269883 | orchestrator | 2025-08-29 14:45:20.269894 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:45:20.269905 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.151) 0:00:14.942 ********* 2025-08-29 14:45:20.269916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:20.269926 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  
2025-08-29 14:45:20.269937 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269948 | orchestrator | 2025-08-29 14:45:20.269959 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:45:20.269970 | orchestrator | Friday 29 August 2025 14:45:19 +0000 (0:00:00.148) 0:00:15.090 ********* 2025-08-29 14:45:20.269981 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.269992 | orchestrator | 2025-08-29 14:45:20.270003 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:45:20.270085 | orchestrator | Friday 29 August 2025 14:45:20 +0000 (0:00:00.148) 0:00:15.239 ********* 2025-08-29 14:45:20.270098 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:20.270109 | orchestrator | 2025-08-29 14:45:20.270126 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:45:27.226805 | orchestrator | Friday 29 August 2025 14:45:20 +0000 (0:00:00.162) 0:00:15.401 ********* 2025-08-29 14:45:27.226917 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.226933 | orchestrator | 2025-08-29 14:45:27.226946 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:45:27.226957 | orchestrator | Friday 29 August 2025 14:45:20 +0000 (0:00:00.179) 0:00:15.581 ********* 2025-08-29 14:45:27.226968 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:27.226980 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:45:27.226992 | orchestrator | } 2025-08-29 14:45:27.227003 | orchestrator | 2025-08-29 14:45:27.227014 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:45:27.227025 | orchestrator | Friday 29 August 2025 14:45:20 +0000 (0:00:00.499) 0:00:16.081 ********* 2025-08-29 14:45:27.227036 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:27.227047 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:45:27.227058 | orchestrator | } 2025-08-29 14:45:27.227069 | orchestrator | 2025-08-29 14:45:27.227080 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:45:27.227091 | orchestrator | Friday 29 August 2025 14:45:21 +0000 (0:00:00.180) 0:00:16.261 ********* 2025-08-29 14:45:27.227102 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:27.227112 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:45:27.227124 | orchestrator | } 2025-08-29 14:45:27.227135 | orchestrator | 2025-08-29 14:45:27.227147 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:45:27.227159 | orchestrator | Friday 29 August 2025 14:45:21 +0000 (0:00:00.197) 0:00:16.459 ********* 2025-08-29 14:45:27.227170 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:27.227181 | orchestrator | 2025-08-29 14:45:27.227192 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:45:27.227202 | orchestrator | Friday 29 August 2025 14:45:22 +0000 (0:00:00.708) 0:00:17.167 ********* 2025-08-29 14:45:27.227239 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:27.227251 | orchestrator | 2025-08-29 14:45:27.227262 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:45:27.227272 | orchestrator | Friday 29 August 2025 14:45:22 +0000 (0:00:00.560) 
0:00:17.727 ********* 2025-08-29 14:45:27.227283 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:27.227294 | orchestrator | 2025-08-29 14:45:27.227305 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:45:27.227317 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.549) 0:00:18.276 ********* 2025-08-29 14:45:27.227329 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:27.227341 | orchestrator | 2025-08-29 14:45:27.227354 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:45:27.227366 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.133) 0:00:18.409 ********* 2025-08-29 14:45:27.227378 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227390 | orchestrator | 2025-08-29 14:45:27.227402 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:45:27.227414 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.129) 0:00:18.539 ********* 2025-08-29 14:45:27.227426 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227437 | orchestrator | 2025-08-29 14:45:27.227450 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:45:27.227461 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.112) 0:00:18.652 ********* 2025-08-29 14:45:27.227473 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:27.227511 | orchestrator |  "vgs_report": { 2025-08-29 14:45:27.227524 | orchestrator |  "vg": [] 2025-08-29 14:45:27.227536 | orchestrator |  } 2025-08-29 14:45:27.227547 | orchestrator | } 2025-08-29 14:45:27.227575 | orchestrator | 2025-08-29 14:45:27.227587 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:45:27.227599 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.176) 0:00:18.829 ********* 2025-08-29 14:45:27.227611 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227622 | orchestrator | 2025-08-29 14:45:27.227634 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:45:27.227646 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.147) 0:00:18.976 ********* 2025-08-29 14:45:27.227658 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227670 | orchestrator | 2025-08-29 14:45:27.227681 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:45:27.227692 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.140) 0:00:19.117 ********* 2025-08-29 14:45:27.227702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227713 | orchestrator | 2025-08-29 14:45:27.227724 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:45:27.227734 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.395) 0:00:19.513 ********* 2025-08-29 14:45:27.227745 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227756 | orchestrator | 2025-08-29 14:45:27.227767 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:45:27.227777 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.140) 0:00:19.653 ********* 2025-08-29 14:45:27.227788 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227799 | orchestrator | 
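The "Gather DB/WAL VGs with total and available size in bytes" tasks query LVM in JSON form and merge the output into vgs_report, which stays empty here because no dedicated DB or WAL devices are configured on this node. A minimal sketch of such a query; the register name follows the _db_vgs_cmd_output naming visible in the "Combine JSON" task above, the rest is an assumption:

  - name: Gather DB VGs with total and available size in bytes  # sketch
    ansible.builtin.command: >
      vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
    register: _db_vgs_cmd_output
    changed_when: false

  - name: Combine JSON from _db_vgs_cmd_output  # sketch
    ansible.builtin.set_fact:
      vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"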
2025-08-29 14:45:27.227810 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:45:27.227821 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.147) 0:00:19.801 ********* 2025-08-29 14:45:27.227832 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227842 | orchestrator | 2025-08-29 14:45:27.227853 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:45:27.227864 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.144) 0:00:19.945 ********* 2025-08-29 14:45:27.227874 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227885 | orchestrator | 2025-08-29 14:45:27.227896 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:45:27.227906 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.151) 0:00:20.096 ********* 2025-08-29 14:45:27.227917 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227928 | orchestrator | 2025-08-29 14:45:27.227939 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:45:27.227967 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.160) 0:00:20.257 ********* 2025-08-29 14:45:27.227979 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.227990 | orchestrator | 2025-08-29 14:45:27.228001 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:45:27.228011 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.155) 0:00:20.413 ********* 2025-08-29 14:45:27.228022 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228033 | orchestrator | 2025-08-29 14:45:27.228043 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:45:27.228054 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.169) 0:00:20.583 ********* 2025-08-29 14:45:27.228065 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228076 | orchestrator | 2025-08-29 14:45:27.228086 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:45:27.228097 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.148) 0:00:20.732 ********* 2025-08-29 14:45:27.228108 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228119 | orchestrator | 2025-08-29 14:45:27.228129 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:45:27.228147 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.176) 0:00:20.908 ********* 2025-08-29 14:45:27.228158 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228169 | orchestrator | 2025-08-29 14:45:27.228180 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:45:27.228191 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.159) 0:00:21.068 ********* 2025-08-29 14:45:27.228202 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228231 | orchestrator | 2025-08-29 14:45:27.228242 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:45:27.228253 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.161) 0:00:21.229 ********* 2025-08-29 14:45:27.228266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:27.228279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:27.228290 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228301 | orchestrator | 2025-08-29 14:45:27.228312 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:45:27.228323 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.161) 0:00:21.391 ********* 2025-08-29 14:45:27.228333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:27.228344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:27.228355 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228366 | orchestrator | 2025-08-29 14:45:27.228377 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:45:27.228388 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.350) 0:00:21.741 ********* 2025-08-29 14:45:27.228399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:27.228410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:27.228421 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228431 | orchestrator | 2025-08-29 14:45:27.228442 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:45:27.228453 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.165) 0:00:21.907 ********* 2025-08-29 14:45:27.228464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:27.228475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:27.228485 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228496 | orchestrator | 2025-08-29 14:45:27.228507 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:45:27.228518 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.155) 0:00:22.063 ********* 2025-08-29 14:45:27.228528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:27.228539 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:27.228550 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:27.228561 | orchestrator | 2025-08-29 14:45:27.228571 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-08-29 14:45:27.228589 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.142) 0:00:22.206 ********* 2025-08-29 14:45:27.228608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:27.228625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:33.064197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:33.064349 | orchestrator | 2025-08-29 14:45:33.064362 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:45:33.064371 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.154) 0:00:22.360 ********* 2025-08-29 14:45:33.064378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:33.064387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:33.064393 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:33.064400 | orchestrator | 2025-08-29 14:45:33.064407 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:45:33.064413 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.203) 0:00:22.564 ********* 2025-08-29 14:45:33.064420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:33.064426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:33.064432 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:33.064439 | orchestrator | 2025-08-29 14:45:33.064445 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:45:33.064452 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.174) 0:00:22.739 ********* 2025-08-29 14:45:33.064458 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:33.064465 | orchestrator | 2025-08-29 14:45:33.064472 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:45:33.064478 | orchestrator | Friday 29 August 2025 14:45:28 +0000 (0:00:00.554) 0:00:23.293 ********* 2025-08-29 14:45:33.064485 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:33.064491 | orchestrator | 2025-08-29 14:45:33.064497 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:45:33.064503 | orchestrator | Friday 29 August 2025 14:45:28 +0000 (0:00:00.547) 0:00:23.841 ********* 2025-08-29 14:45:33.064509 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:33.064516 | orchestrator | 2025-08-29 14:45:33.064522 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:45:33.064528 | orchestrator | Friday 29 August 2025 14:45:28 +0000 (0:00:00.142) 0:00:23.983 ********* 2025-08-29 14:45:33.064535 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'vg_name': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'}) 2025-08-29 14:45:33.064542 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'vg_name': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'}) 2025-08-29 14:45:33.064549 | orchestrator | 2025-08-29 14:45:33.064570 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:45:33.064576 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.173) 0:00:24.157 ********* 2025-08-29 14:45:33.064583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:33.064589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:33.064612 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:33.064619 | orchestrator | 2025-08-29 14:45:33.064625 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:45:33.064631 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.172) 0:00:24.329 ********* 2025-08-29 14:45:33.064637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:33.064644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:33.064650 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:33.064656 | orchestrator | 2025-08-29 14:45:33.064662 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:45:33.064669 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.343) 0:00:24.673 ********* 2025-08-29 14:45:33.064675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'})  2025-08-29 14:45:33.064682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'})  2025-08-29 14:45:33.064688 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:33.064694 | orchestrator | 2025-08-29 14:45:33.064701 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 14:45:33.064707 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.172) 0:00:24.845 ********* 2025-08-29 14:45:33.064713 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:33.064720 | orchestrator |  "lvm_report": { 2025-08-29 14:45:33.064726 | orchestrator |  "lv": [ 2025-08-29 14:45:33.064733 | orchestrator |  { 2025-08-29 14:45:33.064752 | orchestrator |  "lv_name": "osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9", 2025-08-29 14:45:33.064760 | orchestrator |  "vg_name": "ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9" 2025-08-29 14:45:33.064767 | orchestrator |  }, 2025-08-29 14:45:33.064774 | orchestrator |  { 2025-08-29 14:45:33.064781 | orchestrator |  "lv_name": "osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0", 2025-08-29 14:45:33.064788 | orchestrator |  "vg_name": 
"ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0" 2025-08-29 14:45:33.064795 | orchestrator |  } 2025-08-29 14:45:33.064802 | orchestrator |  ], 2025-08-29 14:45:33.064809 | orchestrator |  "pv": [ 2025-08-29 14:45:33.064816 | orchestrator |  { 2025-08-29 14:45:33.064823 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:45:33.064830 | orchestrator |  "vg_name": "ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9" 2025-08-29 14:45:33.064837 | orchestrator |  }, 2025-08-29 14:45:33.064844 | orchestrator |  { 2025-08-29 14:45:33.064851 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:45:33.064858 | orchestrator |  "vg_name": "ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0" 2025-08-29 14:45:33.064865 | orchestrator |  } 2025-08-29 14:45:33.064872 | orchestrator |  ] 2025-08-29 14:45:33.064879 | orchestrator |  } 2025-08-29 14:45:33.064887 | orchestrator | } 2025-08-29 14:45:33.064894 | orchestrator | 2025-08-29 14:45:33.064901 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:45:33.064908 | orchestrator | 2025-08-29 14:45:33.064916 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:45:33.064923 | orchestrator | Friday 29 August 2025 14:45:30 +0000 (0:00:00.299) 0:00:25.144 ********* 2025-08-29 14:45:33.064930 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:45:33.064937 | orchestrator | 2025-08-29 14:45:33.064949 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:45:33.064957 | orchestrator | Friday 29 August 2025 14:45:30 +0000 (0:00:00.302) 0:00:25.447 ********* 2025-08-29 14:45:33.064964 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:33.064971 | orchestrator | 2025-08-29 14:45:33.064978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.064985 | orchestrator | Friday 29 August 2025 14:45:30 +0000 (0:00:00.282) 0:00:25.730 ********* 2025-08-29 14:45:33.064992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:45:33.064999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:45:33.065006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:45:33.065013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:45:33.065020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:45:33.065027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:45:33.065034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:45:33.065041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:45:33.065052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:45:33.065059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:45:33.065066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:45:33.065073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-08-29 14:45:33.065080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:45:33.065087 | orchestrator | 2025-08-29 14:45:33.065094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065101 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.493) 0:00:26.224 ********* 2025-08-29 14:45:33.065108 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065115 | orchestrator | 2025-08-29 14:45:33.065122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065129 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.213) 0:00:26.437 ********* 2025-08-29 14:45:33.065136 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065143 | orchestrator | 2025-08-29 14:45:33.065150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065156 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.199) 0:00:26.637 ********* 2025-08-29 14:45:33.065163 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065169 | orchestrator | 2025-08-29 14:45:33.065175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065181 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.230) 0:00:26.868 ********* 2025-08-29 14:45:33.065188 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065194 | orchestrator | 2025-08-29 14:45:33.065200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065206 | orchestrator | Friday 29 August 2025 14:45:32 +0000 (0:00:00.670) 0:00:27.538 ********* 2025-08-29 14:45:33.065229 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065237 | orchestrator | 2025-08-29 14:45:33.065247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065257 | orchestrator | Friday 29 August 2025 14:45:32 +0000 (0:00:00.204) 0:00:27.743 ********* 2025-08-29 14:45:33.065266 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065277 | orchestrator | 2025-08-29 14:45:33.065287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:33.065305 | orchestrator | Friday 29 August 2025 14:45:32 +0000 (0:00:00.231) 0:00:27.975 ********* 2025-08-29 14:45:33.065316 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:33.065322 | orchestrator | 2025-08-29 14:45:33.065334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:43.881724 | orchestrator | Friday 29 August 2025 14:45:33 +0000 (0:00:00.206) 0:00:28.181 ********* 2025-08-29 14:45:43.881814 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.881821 | orchestrator | 2025-08-29 14:45:43.881826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:43.881831 | orchestrator | Friday 29 August 2025 14:45:33 +0000 (0:00:00.268) 0:00:28.450 ********* 2025-08-29 14:45:43.881836 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd) 2025-08-29 14:45:43.881842 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd) 2025-08-29 
14:45:43.881846 | orchestrator | 2025-08-29 14:45:43.881850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:43.881855 | orchestrator | Friday 29 August 2025 14:45:33 +0000 (0:00:00.478) 0:00:28.929 ********* 2025-08-29 14:45:43.881859 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8) 2025-08-29 14:45:43.881862 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8) 2025-08-29 14:45:43.881866 | orchestrator | 2025-08-29 14:45:43.881870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:43.881874 | orchestrator | Friday 29 August 2025 14:45:34 +0000 (0:00:00.586) 0:00:29.515 ********* 2025-08-29 14:45:43.881878 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048) 2025-08-29 14:45:43.881882 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048) 2025-08-29 14:45:43.881885 | orchestrator | 2025-08-29 14:45:43.881889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:43.881893 | orchestrator | Friday 29 August 2025 14:45:34 +0000 (0:00:00.448) 0:00:29.963 ********* 2025-08-29 14:45:43.881897 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008) 2025-08-29 14:45:43.881901 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008) 2025-08-29 14:45:43.881905 | orchestrator | 2025-08-29 14:45:43.881908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:43.881912 | orchestrator | Friday 29 August 2025 14:45:35 +0000 (0:00:00.461) 0:00:30.424 ********* 2025-08-29 14:45:43.881916 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:45:43.881920 | orchestrator | 2025-08-29 14:45:43.881924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.881928 | orchestrator | Friday 29 August 2025 14:45:35 +0000 (0:00:00.340) 0:00:30.764 ********* 2025-08-29 14:45:43.881931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:45:43.881937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:45:43.881941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:45:43.881945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:45:43.881949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:45:43.881953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:45:43.881964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:45:43.881984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:45:43.881988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 14:45:43.881992 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:45:43.881996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:45:43.882000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:45:43.882004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:45:43.882007 | orchestrator | 2025-08-29 14:45:43.882011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882047 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.631) 0:00:31.396 ********* 2025-08-29 14:45:43.882051 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882055 | orchestrator | 2025-08-29 14:45:43.882058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882062 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.212) 0:00:31.609 ********* 2025-08-29 14:45:43.882066 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882070 | orchestrator | 2025-08-29 14:45:43.882074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882078 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.188) 0:00:31.797 ********* 2025-08-29 14:45:43.882082 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882085 | orchestrator | 2025-08-29 14:45:43.882089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882093 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.241) 0:00:32.039 ********* 2025-08-29 14:45:43.882096 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882100 | orchestrator | 2025-08-29 14:45:43.882118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882122 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.199) 0:00:32.238 ********* 2025-08-29 14:45:43.882126 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882130 | orchestrator | 2025-08-29 14:45:43.882134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882138 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.184) 0:00:32.422 ********* 2025-08-29 14:45:43.882141 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882145 | orchestrator | 2025-08-29 14:45:43.882149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882153 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.224) 0:00:32.647 ********* 2025-08-29 14:45:43.882156 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882160 | orchestrator | 2025-08-29 14:45:43.882164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882167 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.227) 0:00:32.874 ********* 2025-08-29 14:45:43.882171 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882175 | orchestrator | 2025-08-29 14:45:43.882179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882182 | orchestrator 
| Friday 29 August 2025 14:45:37 +0000 (0:00:00.234) 0:00:33.109 ********* 2025-08-29 14:45:43.882186 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 14:45:43.882190 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 14:45:43.882194 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 14:45:43.882198 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 14:45:43.882202 | orchestrator | 2025-08-29 14:45:43.882205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882210 | orchestrator | Friday 29 August 2025 14:45:38 +0000 (0:00:00.944) 0:00:34.053 ********* 2025-08-29 14:45:43.882248 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882253 | orchestrator | 2025-08-29 14:45:43.882257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882261 | orchestrator | Friday 29 August 2025 14:45:39 +0000 (0:00:00.212) 0:00:34.266 ********* 2025-08-29 14:45:43.882265 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882269 | orchestrator | 2025-08-29 14:45:43.882274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882278 | orchestrator | Friday 29 August 2025 14:45:39 +0000 (0:00:00.198) 0:00:34.465 ********* 2025-08-29 14:45:43.882282 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882286 | orchestrator | 2025-08-29 14:45:43.882290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:43.882295 | orchestrator | Friday 29 August 2025 14:45:39 +0000 (0:00:00.646) 0:00:35.111 ********* 2025-08-29 14:45:43.882299 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882303 | orchestrator | 2025-08-29 14:45:43.882307 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:45:43.882311 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.222) 0:00:35.334 ********* 2025-08-29 14:45:43.882316 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882320 | orchestrator | 2025-08-29 14:45:43.882326 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:45:43.882331 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.158) 0:00:35.493 ********* 2025-08-29 14:45:43.882335 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}}) 2025-08-29 14:45:43.882340 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}}) 2025-08-29 14:45:43.882344 | orchestrator | 2025-08-29 14:45:43.882348 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:45:43.882352 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.185) 0:00:35.678 ********* 2025-08-29 14:45:43.882358 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}) 2025-08-29 14:45:43.882364 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}) 2025-08-29 14:45:43.882369 | orchestrator | 2025-08-29 14:45:43.882373 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-08-29 14:45:43.882377 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:01.836) 0:00:37.515 ********* 2025-08-29 14:45:43.882382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:43.882388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:43.882392 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:43.882396 | orchestrator | 2025-08-29 14:45:43.882400 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 14:45:43.882404 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.150) 0:00:37.666 ********* 2025-08-29 14:45:43.882408 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}) 2025-08-29 14:45:43.882413 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}) 2025-08-29 14:45:43.882417 | orchestrator | 2025-08-29 14:45:43.882424 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:45:49.480329 | orchestrator | Friday 29 August 2025 14:45:43 +0000 (0:00:01.347) 0:00:39.013 ********* 2025-08-29 14:45:49.480495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.480513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.480524 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.480536 | orchestrator | 2025-08-29 14:45:49.480549 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:45:49.480560 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.151) 0:00:39.165 ********* 2025-08-29 14:45:49.480570 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.480581 | orchestrator | 2025-08-29 14:45:49.480592 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:45:49.480603 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.145) 0:00:39.311 ********* 2025-08-29 14:45:49.480615 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.480626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.480636 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.480647 | orchestrator | 2025-08-29 14:45:49.480658 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:45:49.480669 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.157) 0:00:39.469 ********* 2025-08-29 14:45:49.480679 | orchestrator | skipping: [testbed-node-4] 
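
For each entry in ceph_osd_devices (here sdb and sdc, each carrying an osd_lvm_uuid), the play creates one volume group named ceph-<uuid> on the raw device and one logical volume osd-block-<uuid> filling it, which is what the two "changed" items under "Create block VGs" and "Create block LVs" report above. A minimal sketch of this step using the community.general LVM modules, assuming a hypothetical fact name _block_vgs; this is not the actual OSISM task file:

- name: Create dict of block VGs -> PVs from ceph_osd_devices
  ansible.builtin.set_fact:
    _block_vgs: >-
      {{ _block_vgs | default({})
         | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block VGs            # e.g. VG ceph-<uuid> on /dev/sdb
  community.general.lvg:
    vg: "{{ item.key }}"
    pvs: "{{ item.value }}"
    state: present
  loop: "{{ _block_vgs | dict2items }}"

- name: Create block LVs            # e.g. LV osd-block-<uuid> spanning the whole VG
  community.general.lvol:
    vg: "{{ item.key }}"
    lv: "osd-block-{{ item.key | regex_replace('^ceph-', '') }}"
    size: 100%VG
    state: present
  loop: "{{ _block_vgs | dict2items }}"

The data/data_vg pairs echoed in the skipped "Print ..." tasks have the shape that lvm_volumes entries use, so the pre-created VG/LV pairs, rather than whole devices, can later be handed to ceph-volume.
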
2025-08-29 14:45:49.480690 | orchestrator | 2025-08-29 14:45:49.480701 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:45:49.480712 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.142) 0:00:39.611 ********* 2025-08-29 14:45:49.480723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.480733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.480744 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.480755 | orchestrator | 2025-08-29 14:45:49.480766 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:45:49.480777 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.155) 0:00:39.767 ********* 2025-08-29 14:45:49.480789 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.480801 | orchestrator | 2025-08-29 14:45:49.480832 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:45:49.480844 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.322) 0:00:40.090 ********* 2025-08-29 14:45:49.480857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.480869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.480882 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.480894 | orchestrator | 2025-08-29 14:45:49.480906 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:45:49.480918 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.152) 0:00:40.243 ********* 2025-08-29 14:45:49.480930 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:49.480942 | orchestrator | 2025-08-29 14:45:49.480954 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:45:49.480966 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.195) 0:00:40.438 ********* 2025-08-29 14:45:49.480986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.480999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.481012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481024 | orchestrator | 2025-08-29 14:45:49.481036 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:45:49.481048 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.152) 0:00:40.590 ********* 2025-08-29 14:45:49.481060 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.481072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.481084 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481096 | orchestrator | 2025-08-29 14:45:49.481109 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:45:49.481121 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.161) 0:00:40.751 ********* 2025-08-29 14:45:49.481155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:49.481166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:49.481177 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481188 | orchestrator | 2025-08-29 14:45:49.481199 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:45:49.481210 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.157) 0:00:40.909 ********* 2025-08-29 14:45:49.481246 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481268 | orchestrator | 2025-08-29 14:45:49.481286 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:45:49.481302 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.143) 0:00:41.052 ********* 2025-08-29 14:45:49.481319 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481337 | orchestrator | 2025-08-29 14:45:49.481349 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:45:49.481360 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.136) 0:00:41.189 ********* 2025-08-29 14:45:49.481370 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481381 | orchestrator | 2025-08-29 14:45:49.481391 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:45:49.481402 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.143) 0:00:41.333 ********* 2025-08-29 14:45:49.481413 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:45:49.481424 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:45:49.481435 | orchestrator | } 2025-08-29 14:45:49.481446 | orchestrator | 2025-08-29 14:45:49.481456 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:45:49.481467 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.149) 0:00:41.482 ********* 2025-08-29 14:45:49.481477 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:45:49.481488 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:45:49.481499 | orchestrator | } 2025-08-29 14:45:49.481509 | orchestrator | 2025-08-29 14:45:49.481520 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:45:49.481531 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.136) 0:00:41.619 ********* 2025-08-29 14:45:49.481541 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:45:49.481552 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:45:49.481564 | orchestrator | } 2025-08-29 14:45:49.481582 | orchestrator | 2025-08-29 14:45:49.481593 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-08-29 14:45:49.481603 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.135) 0:00:41.755 ********* 2025-08-29 14:45:49.481614 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:49.481625 | orchestrator | 2025-08-29 14:45:49.481636 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:45:49.481647 | orchestrator | Friday 29 August 2025 14:45:47 +0000 (0:00:00.734) 0:00:42.489 ********* 2025-08-29 14:45:49.481657 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:49.481668 | orchestrator | 2025-08-29 14:45:49.481679 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:45:49.481690 | orchestrator | Friday 29 August 2025 14:45:47 +0000 (0:00:00.548) 0:00:43.038 ********* 2025-08-29 14:45:49.481701 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:49.481711 | orchestrator | 2025-08-29 14:45:49.481722 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:45:49.481733 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.536) 0:00:43.574 ********* 2025-08-29 14:45:49.481744 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:49.481754 | orchestrator | 2025-08-29 14:45:49.481765 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:45:49.481776 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.135) 0:00:43.709 ********* 2025-08-29 14:45:49.481786 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481797 | orchestrator | 2025-08-29 14:45:49.481808 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:45:49.481827 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.107) 0:00:43.816 ********* 2025-08-29 14:45:49.481838 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481849 | orchestrator | 2025-08-29 14:45:49.481860 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:45:49.481870 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.106) 0:00:43.923 ********* 2025-08-29 14:45:49.481881 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:45:49.481892 | orchestrator |  "vgs_report": { 2025-08-29 14:45:49.481903 | orchestrator |  "vg": [] 2025-08-29 14:45:49.481914 | orchestrator |  } 2025-08-29 14:45:49.481925 | orchestrator | } 2025-08-29 14:45:49.481935 | orchestrator | 2025-08-29 14:45:49.481946 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:45:49.481957 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.140) 0:00:44.063 ********* 2025-08-29 14:45:49.481968 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.481978 | orchestrator | 2025-08-29 14:45:49.481989 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:45:49.482000 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.144) 0:00:44.207 ********* 2025-08-29 14:45:49.482011 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.482087 | orchestrator | 2025-08-29 14:45:49.482099 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:45:49.482109 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.135) 
0:00:44.342 ********* 2025-08-29 14:45:49.482120 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.482131 | orchestrator | 2025-08-29 14:45:49.482142 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:45:49.482153 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.134) 0:00:44.476 ********* 2025-08-29 14:45:49.482164 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:49.482174 | orchestrator | 2025-08-29 14:45:49.482185 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:45:49.482204 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.134) 0:00:44.611 ********* 2025-08-29 14:45:54.289474 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290332 | orchestrator | 2025-08-29 14:45:54.290367 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:45:54.290409 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.138) 0:00:44.750 ********* 2025-08-29 14:45:54.290420 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290431 | orchestrator | 2025-08-29 14:45:54.290443 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:45:54.290453 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.334) 0:00:45.084 ********* 2025-08-29 14:45:54.290464 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290475 | orchestrator | 2025-08-29 14:45:54.290486 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:45:54.290497 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.147) 0:00:45.232 ********* 2025-08-29 14:45:54.290508 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290518 | orchestrator | 2025-08-29 14:45:54.290529 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:45:54.290540 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.143) 0:00:45.376 ********* 2025-08-29 14:45:54.290551 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290561 | orchestrator | 2025-08-29 14:45:54.290572 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:45:54.290583 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.138) 0:00:45.514 ********* 2025-08-29 14:45:54.290594 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290605 | orchestrator | 2025-08-29 14:45:54.290615 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:45:54.290627 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.147) 0:00:45.662 ********* 2025-08-29 14:45:54.290637 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290648 | orchestrator | 2025-08-29 14:45:54.290658 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:45:54.290669 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.132) 0:00:45.794 ********* 2025-08-29 14:45:54.290680 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290690 | orchestrator | 2025-08-29 14:45:54.290701 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:45:54.290712 | orchestrator | Friday 29 August 2025 14:45:50 
+0000 (0:00:00.144) 0:00:45.938 ********* 2025-08-29 14:45:54.290722 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290733 | orchestrator | 2025-08-29 14:45:54.290743 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:45:54.290754 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.143) 0:00:46.082 ********* 2025-08-29 14:45:54.290765 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290775 | orchestrator | 2025-08-29 14:45:54.290786 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:45:54.290797 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.144) 0:00:46.227 ********* 2025-08-29 14:45:54.290827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.290841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.290852 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290864 | orchestrator | 2025-08-29 14:45:54.290874 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:45:54.290885 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.164) 0:00:46.391 ********* 2025-08-29 14:45:54.290896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.290907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.290928 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.290947 | orchestrator | 2025-08-29 14:45:54.290966 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:45:54.290982 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.146) 0:00:46.537 ********* 2025-08-29 14:45:54.290999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.291017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.291035 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.291051 | orchestrator | 2025-08-29 14:45:54.291071 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:45:54.291087 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.151) 0:00:46.689 ********* 2025-08-29 14:45:54.291102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.291119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.291139 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.291159 | orchestrator | 2025-08-29 14:45:54.291180 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:45:54.291255 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.358) 0:00:47.047 ********* 2025-08-29 14:45:54.291278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.291296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.291312 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.291329 | orchestrator | 2025-08-29 14:45:54.291346 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:45:54.291364 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.166) 0:00:47.213 ********* 2025-08-29 14:45:54.291380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.291398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.291416 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.291434 | orchestrator | 2025-08-29 14:45:54.291456 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:45:54.291476 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.152) 0:00:47.366 ********* 2025-08-29 14:45:54.291494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.291512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.291605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.291624 | orchestrator | 2025-08-29 14:45:54.291642 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:45:54.291661 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.160) 0:00:47.527 ********* 2025-08-29 14:45:54.291726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.291747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.291781 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.291800 | orchestrator | 2025-08-29 14:45:54.291818 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:45:54.291846 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.155) 0:00:47.682 ********* 2025-08-29 14:45:54.291866 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:54.291885 | orchestrator | 2025-08-29 14:45:54.291904 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:45:54.291922 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.554) 
0:00:48.236 ********* 2025-08-29 14:45:54.291940 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:54.291958 | orchestrator | 2025-08-29 14:45:54.291976 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:45:54.291995 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.542) 0:00:48.779 ********* 2025-08-29 14:45:54.292013 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:54.292032 | orchestrator | 2025-08-29 14:45:54.292049 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:45:54.292068 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.148) 0:00:48.928 ********* 2025-08-29 14:45:54.292086 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'vg_name': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}) 2025-08-29 14:45:54.292105 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'vg_name': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}) 2025-08-29 14:45:54.292124 | orchestrator | 2025-08-29 14:45:54.292142 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:45:54.292159 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.178) 0:00:49.106 ********* 2025-08-29 14:45:54.292178 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.292197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.292215 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:54.292258 | orchestrator | 2025-08-29 14:45:54.292276 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:45:54.292295 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.159) 0:00:49.266 ********* 2025-08-29 14:45:54.292312 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:45:54.292329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:45:54.292360 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:00.370713 | orchestrator | 2025-08-29 14:46:00.370846 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:46:00.370863 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.156) 0:00:49.422 ********* 2025-08-29 14:46:00.370876 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'})  2025-08-29 14:46:00.370890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'})  2025-08-29 14:46:00.370901 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:00.370914 | orchestrator | 2025-08-29 14:46:00.370926 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 
14:46:00.370937 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.153) 0:00:49.576 ********* 2025-08-29 14:46:00.370978 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:46:00.370990 | orchestrator |  "lvm_report": { 2025-08-29 14:46:00.371002 | orchestrator |  "lv": [ 2025-08-29 14:46:00.371014 | orchestrator |  { 2025-08-29 14:46:00.371026 | orchestrator |  "lv_name": "osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12", 2025-08-29 14:46:00.371038 | orchestrator |  "vg_name": "ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12" 2025-08-29 14:46:00.371048 | orchestrator |  }, 2025-08-29 14:46:00.371059 | orchestrator |  { 2025-08-29 14:46:00.371070 | orchestrator |  "lv_name": "osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63", 2025-08-29 14:46:00.371081 | orchestrator |  "vg_name": "ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63" 2025-08-29 14:46:00.371092 | orchestrator |  } 2025-08-29 14:46:00.371103 | orchestrator |  ], 2025-08-29 14:46:00.371113 | orchestrator |  "pv": [ 2025-08-29 14:46:00.371124 | orchestrator |  { 2025-08-29 14:46:00.371135 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:46:00.371147 | orchestrator |  "vg_name": "ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12" 2025-08-29 14:46:00.371157 | orchestrator |  }, 2025-08-29 14:46:00.371168 | orchestrator |  { 2025-08-29 14:46:00.371179 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:46:00.371190 | orchestrator |  "vg_name": "ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63" 2025-08-29 14:46:00.371201 | orchestrator |  } 2025-08-29 14:46:00.371212 | orchestrator |  ] 2025-08-29 14:46:00.371222 | orchestrator |  } 2025-08-29 14:46:00.371260 | orchestrator | } 2025-08-29 14:46:00.371274 | orchestrator | 2025-08-29 14:46:00.371286 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:46:00.371298 | orchestrator | 2025-08-29 14:46:00.371311 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:46:00.371323 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.503) 0:00:50.079 ********* 2025-08-29 14:46:00.371335 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 14:46:00.371348 | orchestrator | 2025-08-29 14:46:00.371361 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:46:00.371374 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.241) 0:00:50.321 ********* 2025-08-29 14:46:00.371387 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:00.371399 | orchestrator | 2025-08-29 14:46:00.371413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371425 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.229) 0:00:50.551 ********* 2025-08-29 14:46:00.371437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:46:00.371449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:46:00.371461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:46:00.371473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:46:00.371485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:46:00.371498 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:46:00.371510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:46:00.371522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:46:00.371534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 14:46:00.371546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:46:00.371558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:46:00.371581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:46:00.371593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:46:00.371604 | orchestrator | 2025-08-29 14:46:00.371615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371626 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.390) 0:00:50.942 ********* 2025-08-29 14:46:00.371637 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371648 | orchestrator | 2025-08-29 14:46:00.371664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371675 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.206) 0:00:51.148 ********* 2025-08-29 14:46:00.371686 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371697 | orchestrator | 2025-08-29 14:46:00.371708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371738 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.192) 0:00:51.340 ********* 2025-08-29 14:46:00.371749 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371760 | orchestrator | 2025-08-29 14:46:00.371771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371781 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.192) 0:00:51.532 ********* 2025-08-29 14:46:00.371792 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371803 | orchestrator | 2025-08-29 14:46:00.371814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371824 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.212) 0:00:51.744 ********* 2025-08-29 14:46:00.371891 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371904 | orchestrator | 2025-08-29 14:46:00.371915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371926 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.191) 0:00:51.936 ********* 2025-08-29 14:46:00.371937 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371948 | orchestrator | 2025-08-29 14:46:00.371958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.371969 | orchestrator | Friday 29 August 2025 14:45:57 +0000 (0:00:00.672) 0:00:52.608 ********* 2025-08-29 14:46:00.371980 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.371991 | orchestrator | 2025-08-29 14:46:00.372002 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-08-29 14:46:00.372013 | orchestrator | Friday 29 August 2025 14:45:57 +0000 (0:00:00.217) 0:00:52.826 ********* 2025-08-29 14:46:00.372024 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:00.372035 | orchestrator | 2025-08-29 14:46:00.372046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.372056 | orchestrator | Friday 29 August 2025 14:45:57 +0000 (0:00:00.201) 0:00:53.027 ********* 2025-08-29 14:46:00.372067 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0) 2025-08-29 14:46:00.372080 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0) 2025-08-29 14:46:00.372091 | orchestrator | 2025-08-29 14:46:00.372102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.372113 | orchestrator | Friday 29 August 2025 14:45:58 +0000 (0:00:00.408) 0:00:53.436 ********* 2025-08-29 14:46:00.372124 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166) 2025-08-29 14:46:00.372135 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166) 2025-08-29 14:46:00.372146 | orchestrator | 2025-08-29 14:46:00.372156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.372168 | orchestrator | Friday 29 August 2025 14:45:58 +0000 (0:00:00.414) 0:00:53.851 ********* 2025-08-29 14:46:00.372183 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf) 2025-08-29 14:46:00.372202 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf) 2025-08-29 14:46:00.372213 | orchestrator | 2025-08-29 14:46:00.372224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.372251 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.435) 0:00:54.287 ********* 2025-08-29 14:46:00.372262 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a) 2025-08-29 14:46:00.372273 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a) 2025-08-29 14:46:00.372284 | orchestrator | 2025-08-29 14:46:00.372295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:00.372305 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.469) 0:00:54.756 ********* 2025-08-29 14:46:00.372316 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:46:00.372327 | orchestrator | 2025-08-29 14:46:00.372338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:00.372349 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.339) 0:00:55.096 ********* 2025-08-29 14:46:00.372359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:46:00.372370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:46:00.372381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
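
testbed-node-5 now repeats the same discovery and creation sequence as testbed-node-4, ending in the same reporting steps shown above ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", "Print LVM report data"). Those tasks line up with the LVM CLI's JSON report format; the following is a hedged sketch of how they could be written, keeping the register names from the task titles but otherwise an assumption rather than the actual OSISM tasks:

- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --reportformat json --options lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: pvs --reportformat json --options pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report:
      lv: >-
        {{ (_lvs_cmd_output.stdout | from_json).report[0].lv
           | selectattr('lv_name', 'match', 'osd-block-') | list }}
      pv: >-
        {{ (_pvs_cmd_output.stdout | from_json).report[0].pv
           | selectattr('vg_name', 'match', 'ceph-') | list }}

- name: Print LVM report data
  ansible.builtin.debug:
    var: lvm_report

The lvm_report printed for testbed-node-4 above (two osd-block LVs in their ceph-<uuid> VGs, backed by /dev/sdb and /dev/sdc) is the kind of structure such a combine step produces.
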
2025-08-29 14:46:00.372391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:46:00.372402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:46:00.372413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:46:00.372424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:46:00.372434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:46:00.372445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 14:46:00.372456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:46:00.372466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:46:00.372483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:46:09.498466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:46:09.498576 | orchestrator | 2025-08-29 14:46:09.498591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498605 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.402) 0:00:55.498 ********* 2025-08-29 14:46:09.498617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498629 | orchestrator | 2025-08-29 14:46:09.498641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498652 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.187) 0:00:55.686 ********* 2025-08-29 14:46:09.498662 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498673 | orchestrator | 2025-08-29 14:46:09.498684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498695 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.200) 0:00:55.887 ********* 2025-08-29 14:46:09.498706 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498717 | orchestrator | 2025-08-29 14:46:09.498728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498739 | orchestrator | Friday 29 August 2025 14:46:01 +0000 (0:00:00.609) 0:00:56.496 ********* 2025-08-29 14:46:09.498774 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498785 | orchestrator | 2025-08-29 14:46:09.498796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498807 | orchestrator | Friday 29 August 2025 14:46:01 +0000 (0:00:00.180) 0:00:56.677 ********* 2025-08-29 14:46:09.498817 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498828 | orchestrator | 2025-08-29 14:46:09.498839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498849 | orchestrator | Friday 29 August 2025 14:46:01 +0000 (0:00:00.250) 0:00:56.928 ********* 2025-08-29 14:46:09.498860 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498871 | orchestrator | 2025-08-29 14:46:09.498881 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498892 | orchestrator | Friday 29 August 2025 14:46:02 +0000 (0:00:00.225) 0:00:57.153 ********* 2025-08-29 14:46:09.498903 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498914 | orchestrator | 2025-08-29 14:46:09.498924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498935 | orchestrator | Friday 29 August 2025 14:46:02 +0000 (0:00:00.235) 0:00:57.389 ********* 2025-08-29 14:46:09.498946 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.498956 | orchestrator | 2025-08-29 14:46:09.498967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.498978 | orchestrator | Friday 29 August 2025 14:46:02 +0000 (0:00:00.208) 0:00:57.597 ********* 2025-08-29 14:46:09.498988 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 14:46:09.499001 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 14:46:09.499013 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 14:46:09.499042 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 14:46:09.499055 | orchestrator | 2025-08-29 14:46:09.499068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.499080 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.646) 0:00:58.243 ********* 2025-08-29 14:46:09.499090 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499101 | orchestrator | 2025-08-29 14:46:09.499112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.499123 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.221) 0:00:58.465 ********* 2025-08-29 14:46:09.499133 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499144 | orchestrator | 2025-08-29 14:46:09.499155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.499166 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.228) 0:00:58.694 ********* 2025-08-29 14:46:09.499177 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499187 | orchestrator | 2025-08-29 14:46:09.499198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:46:09.499208 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.256) 0:00:58.950 ********* 2025-08-29 14:46:09.499219 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499230 | orchestrator | 2025-08-29 14:46:09.499300 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:46:09.499312 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:00.213) 0:00:59.163 ********* 2025-08-29 14:46:09.499322 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499333 | orchestrator | 2025-08-29 14:46:09.499344 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:46:09.499354 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:00.368) 0:00:59.532 ********* 2025-08-29 14:46:09.499365 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf1413fe-a30b-500c-b995-d4125007de3c'}}) 2025-08-29 14:46:09.499377 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}}) 2025-08-29 14:46:09.499397 | orchestrator | 2025-08-29 14:46:09.499408 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:46:09.499419 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:00.193) 0:00:59.725 ********* 2025-08-29 14:46:09.499431 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'}) 2025-08-29 14:46:09.499443 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}) 2025-08-29 14:46:09.499454 | orchestrator | 2025-08-29 14:46:09.499465 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:46:09.499493 | orchestrator | Friday 29 August 2025 14:46:06 +0000 (0:00:01.915) 0:01:01.640 ********* 2025-08-29 14:46:09.499504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:09.499517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:09.499528 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499538 | orchestrator | 2025-08-29 14:46:09.499549 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 14:46:09.499560 | orchestrator | Friday 29 August 2025 14:46:06 +0000 (0:00:00.156) 0:01:01.797 ********* 2025-08-29 14:46:09.499570 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'}) 2025-08-29 14:46:09.499581 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}) 2025-08-29 14:46:09.499592 | orchestrator | 2025-08-29 14:46:09.499603 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:46:09.499614 | orchestrator | Friday 29 August 2025 14:46:07 +0000 (0:00:01.332) 0:01:03.130 ********* 2025-08-29 14:46:09.499625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:09.499636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:09.499647 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499657 | orchestrator | 2025-08-29 14:46:09.499668 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:46:09.499679 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.172) 0:01:03.302 ********* 2025-08-29 14:46:09.499689 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499700 | orchestrator | 2025-08-29 14:46:09.499710 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:46:09.499721 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.133) 0:01:03.436 
********* 2025-08-29 14:46:09.499732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:09.499749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:09.499760 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499771 | orchestrator | 2025-08-29 14:46:09.499781 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:46:09.499792 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.148) 0:01:03.585 ********* 2025-08-29 14:46:09.499803 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499813 | orchestrator | 2025-08-29 14:46:09.499824 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:46:09.499842 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.143) 0:01:03.729 ********* 2025-08-29 14:46:09.499853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:09.499864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:09.499875 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499885 | orchestrator | 2025-08-29 14:46:09.499896 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:46:09.499907 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.164) 0:01:03.893 ********* 2025-08-29 14:46:09.499918 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499929 | orchestrator | 2025-08-29 14:46:09.499939 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:46:09.499950 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.140) 0:01:04.034 ********* 2025-08-29 14:46:09.499961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:09.499972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:09.499982 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:09.499993 | orchestrator | 2025-08-29 14:46:09.500003 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:46:09.500014 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.142) 0:01:04.176 ********* 2025-08-29 14:46:09.500025 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:09.500036 | orchestrator | 2025-08-29 14:46:09.500046 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:46:09.500057 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.120) 0:01:04.297 ********* 2025-08-29 14:46:09.500074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:15.772707 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:15.772839 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.772856 | orchestrator | 2025-08-29 14:46:15.772869 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:46:15.772882 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.337) 0:01:04.634 ********* 2025-08-29 14:46:15.772893 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:15.772905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:15.772916 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.772927 | orchestrator | 2025-08-29 14:46:15.772939 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:46:15.772951 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.160) 0:01:04.795 ********* 2025-08-29 14:46:15.772962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:15.772974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:15.772985 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.772996 | orchestrator | 2025-08-29 14:46:15.773037 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:46:15.773049 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.161) 0:01:04.956 ********* 2025-08-29 14:46:15.773060 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773071 | orchestrator | 2025-08-29 14:46:15.773081 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:46:15.773092 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.145) 0:01:05.102 ********* 2025-08-29 14:46:15.773103 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773114 | orchestrator | 2025-08-29 14:46:15.773125 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:46:15.773136 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.154) 0:01:05.257 ********* 2025-08-29 14:46:15.773146 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773157 | orchestrator | 2025-08-29 14:46:15.773168 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:46:15.773179 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.136) 0:01:05.393 ********* 2025-08-29 14:46:15.773190 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:46:15.773201 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:46:15.773213 | orchestrator | } 2025-08-29 14:46:15.773226 | orchestrator | 2025-08-29 14:46:15.773271 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:46:15.773292 | orchestrator | Friday 29 August 2025 14:46:10 +0000 
(0:00:00.152) 0:01:05.546 ********* 2025-08-29 14:46:15.773313 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:46:15.773334 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:46:15.773354 | orchestrator | } 2025-08-29 14:46:15.773367 | orchestrator | 2025-08-29 14:46:15.773380 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:46:15.773392 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.155) 0:01:05.701 ********* 2025-08-29 14:46:15.773405 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:46:15.773417 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:46:15.773430 | orchestrator | } 2025-08-29 14:46:15.773442 | orchestrator | 2025-08-29 14:46:15.773454 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:46:15.773466 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.146) 0:01:05.847 ********* 2025-08-29 14:46:15.773479 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:15.773491 | orchestrator | 2025-08-29 14:46:15.773503 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:46:15.773516 | orchestrator | Friday 29 August 2025 14:46:11 +0000 (0:00:00.512) 0:01:06.360 ********* 2025-08-29 14:46:15.773528 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:15.773540 | orchestrator | 2025-08-29 14:46:15.773552 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:46:15.773564 | orchestrator | Friday 29 August 2025 14:46:11 +0000 (0:00:00.542) 0:01:06.902 ********* 2025-08-29 14:46:15.773576 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:15.773587 | orchestrator | 2025-08-29 14:46:15.773598 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:46:15.773609 | orchestrator | Friday 29 August 2025 14:46:12 +0000 (0:00:00.531) 0:01:07.434 ********* 2025-08-29 14:46:15.773620 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:15.773631 | orchestrator | 2025-08-29 14:46:15.773642 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:46:15.773653 | orchestrator | Friday 29 August 2025 14:46:12 +0000 (0:00:00.387) 0:01:07.821 ********* 2025-08-29 14:46:15.773664 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773675 | orchestrator | 2025-08-29 14:46:15.773686 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:46:15.773697 | orchestrator | Friday 29 August 2025 14:46:12 +0000 (0:00:00.108) 0:01:07.929 ********* 2025-08-29 14:46:15.773708 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773728 | orchestrator | 2025-08-29 14:46:15.773739 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:46:15.773774 | orchestrator | Friday 29 August 2025 14:46:12 +0000 (0:00:00.127) 0:01:08.057 ********* 2025-08-29 14:46:15.773786 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:46:15.773797 | orchestrator |  "vgs_report": { 2025-08-29 14:46:15.773808 | orchestrator |  "vg": [] 2025-08-29 14:46:15.773841 | orchestrator |  } 2025-08-29 14:46:15.773853 | orchestrator | } 2025-08-29 14:46:15.773864 | orchestrator | 2025-08-29 14:46:15.773875 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-08-29 14:46:15.773886 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.161) 0:01:08.218 ********* 2025-08-29 14:46:15.773897 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773908 | orchestrator | 2025-08-29 14:46:15.773919 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:46:15.773930 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.148) 0:01:08.367 ********* 2025-08-29 14:46:15.773941 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773952 | orchestrator | 2025-08-29 14:46:15.773963 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:46:15.773974 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.133) 0:01:08.500 ********* 2025-08-29 14:46:15.773985 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.773995 | orchestrator | 2025-08-29 14:46:15.774007 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:46:15.774071 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.150) 0:01:08.650 ********* 2025-08-29 14:46:15.774083 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774094 | orchestrator | 2025-08-29 14:46:15.774105 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:46:15.774116 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.154) 0:01:08.805 ********* 2025-08-29 14:46:15.774127 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774138 | orchestrator | 2025-08-29 14:46:15.774148 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:46:15.774159 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.155) 0:01:08.960 ********* 2025-08-29 14:46:15.774170 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774181 | orchestrator | 2025-08-29 14:46:15.774192 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:46:15.774203 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.137) 0:01:09.098 ********* 2025-08-29 14:46:15.774214 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774225 | orchestrator | 2025-08-29 14:46:15.774260 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:46:15.774281 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.130) 0:01:09.229 ********* 2025-08-29 14:46:15.774300 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774318 | orchestrator | 2025-08-29 14:46:15.774334 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:46:15.774345 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.167) 0:01:09.396 ********* 2025-08-29 14:46:15.774356 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774367 | orchestrator | 2025-08-29 14:46:15.774378 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:46:15.774389 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.362) 0:01:09.758 ********* 2025-08-29 14:46:15.774406 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774418 | orchestrator | 2025-08-29 14:46:15.774428 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:46:15.774439 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.148) 0:01:09.907 ********* 2025-08-29 14:46:15.774450 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774461 | orchestrator | 2025-08-29 14:46:15.774472 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:46:15.774492 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.128) 0:01:10.036 ********* 2025-08-29 14:46:15.774503 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774513 | orchestrator | 2025-08-29 14:46:15.774525 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:46:15.774535 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.128) 0:01:10.164 ********* 2025-08-29 14:46:15.774546 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774557 | orchestrator | 2025-08-29 14:46:15.774568 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:46:15.774579 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.139) 0:01:10.304 ********* 2025-08-29 14:46:15.774590 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774600 | orchestrator | 2025-08-29 14:46:15.774611 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:46:15.774622 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.143) 0:01:10.447 ********* 2025-08-29 14:46:15.774633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:15.774644 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:15.774655 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774666 | orchestrator | 2025-08-29 14:46:15.774677 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:46:15.774688 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.152) 0:01:10.600 ********* 2025-08-29 14:46:15.774699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:15.774710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:15.774721 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:15.774731 | orchestrator | 2025-08-29 14:46:15.774742 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:46:15.774753 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.152) 0:01:10.752 ********* 2025-08-29 14:46:15.774773 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.831598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.831743 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.831761 | orchestrator | 2025-08-29 14:46:18.831774 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:46:18.831787 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.156) 0:01:10.908 ********* 2025-08-29 14:46:18.831799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.831811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.831822 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.831833 | orchestrator | 2025-08-29 14:46:18.831844 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:46:18.831855 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.153) 0:01:11.062 ********* 2025-08-29 14:46:18.831866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.831907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.831919 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.831929 | orchestrator | 2025-08-29 14:46:18.831941 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:46:18.831951 | orchestrator | Friday 29 August 2025 14:46:16 +0000 (0:00:00.161) 0:01:11.223 ********* 2025-08-29 14:46:18.831962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.831973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.831984 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.831995 | orchestrator | 2025-08-29 14:46:18.832006 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:46:18.832034 | orchestrator | Friday 29 August 2025 14:46:16 +0000 (0:00:00.157) 0:01:11.381 ********* 2025-08-29 14:46:18.832045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.832056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.832067 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.832078 | orchestrator | 2025-08-29 14:46:18.832089 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:46:18.832100 | orchestrator | Friday 29 August 2025 14:46:16 +0000 (0:00:00.350) 0:01:11.731 ********* 2025-08-29 14:46:18.832113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 
14:46:18.832126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.832138 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.832150 | orchestrator | 2025-08-29 14:46:18.832162 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:46:18.832174 | orchestrator | Friday 29 August 2025 14:46:16 +0000 (0:00:00.147) 0:01:11.878 ********* 2025-08-29 14:46:18.832185 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:18.832199 | orchestrator | 2025-08-29 14:46:18.832211 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:46:18.832222 | orchestrator | Friday 29 August 2025 14:46:17 +0000 (0:00:00.581) 0:01:12.460 ********* 2025-08-29 14:46:18.832235 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:18.832270 | orchestrator | 2025-08-29 14:46:18.832283 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:46:18.832295 | orchestrator | Friday 29 August 2025 14:46:17 +0000 (0:00:00.555) 0:01:13.016 ********* 2025-08-29 14:46:18.832306 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:18.832316 | orchestrator | 2025-08-29 14:46:18.832327 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:46:18.832338 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.145) 0:01:13.162 ********* 2025-08-29 14:46:18.832349 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'vg_name': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'}) 2025-08-29 14:46:18.832361 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'vg_name': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}) 2025-08-29 14:46:18.832371 | orchestrator | 2025-08-29 14:46:18.832382 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:46:18.832402 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.173) 0:01:13.335 ********* 2025-08-29 14:46:18.832432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.832444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.832455 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.832466 | orchestrator | 2025-08-29 14:46:18.832477 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:46:18.832488 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.153) 0:01:13.489 ********* 2025-08-29 14:46:18.832498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.832510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.832520 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.832532 | orchestrator | 2025-08-29 14:46:18.832543 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:46:18.832554 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.151) 0:01:13.640 ********* 2025-08-29 14:46:18.832565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'})  2025-08-29 14:46:18.832575 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'})  2025-08-29 14:46:18.832586 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:18.832597 | orchestrator | 2025-08-29 14:46:18.832608 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 14:46:18.832618 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.149) 0:01:13.790 ********* 2025-08-29 14:46:18.832629 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:46:18.832640 | orchestrator |  "lvm_report": { 2025-08-29 14:46:18.832651 | orchestrator |  "lv": [ 2025-08-29 14:46:18.832662 | orchestrator |  { 2025-08-29 14:46:18.832673 | orchestrator |  "lv_name": "osd-block-bf1413fe-a30b-500c-b995-d4125007de3c", 2025-08-29 14:46:18.832685 | orchestrator |  "vg_name": "ceph-bf1413fe-a30b-500c-b995-d4125007de3c" 2025-08-29 14:46:18.832696 | orchestrator |  }, 2025-08-29 14:46:18.832712 | orchestrator |  { 2025-08-29 14:46:18.832724 | orchestrator |  "lv_name": "osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec", 2025-08-29 14:46:18.832734 | orchestrator |  "vg_name": "ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec" 2025-08-29 14:46:18.832745 | orchestrator |  } 2025-08-29 14:46:18.832756 | orchestrator |  ], 2025-08-29 14:46:18.832767 | orchestrator |  "pv": [ 2025-08-29 14:46:18.832777 | orchestrator |  { 2025-08-29 14:46:18.832788 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:46:18.832799 | orchestrator |  "vg_name": "ceph-bf1413fe-a30b-500c-b995-d4125007de3c" 2025-08-29 14:46:18.832810 | orchestrator |  }, 2025-08-29 14:46:18.832821 | orchestrator |  { 2025-08-29 14:46:18.832832 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:46:18.832842 | orchestrator |  "vg_name": "ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec" 2025-08-29 14:46:18.832853 | orchestrator |  } 2025-08-29 14:46:18.832864 | orchestrator |  ] 2025-08-29 14:46:18.832875 | orchestrator |  } 2025-08-29 14:46:18.832886 | orchestrator | } 2025-08-29 14:46:18.832897 | orchestrator | 2025-08-29 14:46:18.832907 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:46:18.832919 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:46:18.832937 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:46:18.832948 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:46:18.832959 | orchestrator | 2025-08-29 14:46:18.832970 | orchestrator | 2025-08-29 14:46:18.832981 | orchestrator | 2025-08-29 14:46:18.832991 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:46:18.833002 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.147) 0:01:13.938 ********* 2025-08-29 14:46:18.833013 | orchestrator | 
=============================================================================== 2025-08-29 14:46:18.833024 | orchestrator | Create block VGs -------------------------------------------------------- 5.74s 2025-08-29 14:46:18.833034 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s 2025-08-29 14:46:18.833045 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.95s 2025-08-29 14:46:18.833056 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.69s 2025-08-29 14:46:18.833066 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.65s 2025-08-29 14:46:18.833077 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.65s 2025-08-29 14:46:18.833088 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.62s 2025-08-29 14:46:18.833099 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s 2025-08-29 14:46:18.833116 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s 2025-08-29 14:46:19.189039 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2025-08-29 14:46:19.189137 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2025-08-29 14:46:19.189145 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2025-08-29 14:46:19.189150 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.80s 2025-08-29 14:46:19.189156 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2025-08-29 14:46:19.189161 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.77s 2025-08-29 14:46:19.189166 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s 2025-08-29 14:46:19.189171 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s 2025-08-29 14:46:19.189176 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.68s 2025-08-29 14:46:19.189181 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-08-29 14:46:19.189186 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-08-29 14:46:31.494096 | orchestrator | 2025-08-29 14:46:31 | INFO  | Task a0c993bc-16b3-49c5-844a-61e611c31354 (facts) was prepared for execution. 2025-08-29 14:46:31.494238 | orchestrator | 2025-08-29 14:46:31 | INFO  | It takes a moment until task a0c993bc-16b3-49c5-844a-61e611c31354 (facts) has been started and output is visible here. 
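The "Create block VGs" and "Create block LVs" tasks summarized in the recap above create one LVM volume group and one full-size logical volume per OSD device listed in ceph_osd_devices. A minimal sketch of that pattern, assuming the community.general.lvg and community.general.lvol modules and using placeholder device names and a placeholder UUID rather than values from this run:

# Hypothetical sketch only: one block VG and one block LV per OSD device.
# Device name and UUID below are illustrative placeholders.
- name: Create Ceph OSD block VG/LV (illustrative sketch)
  hosts: testbed-node-5
  become: true
  vars:
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: "00000000-0000-0000-0000-000000000000"
  tasks:
    - name: Create block VG on the raw device
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LV spanning the whole VG
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: "100%FREE"
        shrink: false
      loop: "{{ ceph_osd_devices | dict2items }}"
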
2025-08-29 14:46:43.680220 | orchestrator | 2025-08-29 14:46:43.680438 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 14:46:43.680456 | orchestrator | 2025-08-29 14:46:43.680467 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 14:46:43.680478 | orchestrator | Friday 29 August 2025 14:46:35 +0000 (0:00:00.266) 0:00:00.266 ********* 2025-08-29 14:46:43.680488 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:46:43.680499 | orchestrator | ok: [testbed-manager] 2025-08-29 14:46:43.680509 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:46:43.680550 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:46:43.680560 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:43.680570 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:43.680579 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:43.680589 | orchestrator | 2025-08-29 14:46:43.680599 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 14:46:43.680608 | orchestrator | Friday 29 August 2025 14:46:36 +0000 (0:00:01.055) 0:00:01.321 ********* 2025-08-29 14:46:43.680618 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:46:43.680629 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:46:43.680639 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:46:43.680649 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:46:43.680659 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:43.680668 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:43.680678 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:43.680688 | orchestrator | 2025-08-29 14:46:43.680697 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:46:43.680707 | orchestrator | 2025-08-29 14:46:43.680716 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:46:43.680726 | orchestrator | Friday 29 August 2025 14:46:37 +0000 (0:00:01.239) 0:00:02.561 ********* 2025-08-29 14:46:43.680737 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:46:43.680748 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:46:43.680759 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:46:43.680769 | orchestrator | ok: [testbed-manager] 2025-08-29 14:46:43.680780 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:43.680792 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:43.680802 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:43.680813 | orchestrator | 2025-08-29 14:46:43.680823 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:46:43.680832 | orchestrator | 2025-08-29 14:46:43.680842 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:46:43.680851 | orchestrator | Friday 29 August 2025 14:46:42 +0000 (0:00:04.893) 0:00:07.455 ********* 2025-08-29 14:46:43.680861 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:46:43.680871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:46:43.680880 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:46:43.680889 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:46:43.680899 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:43.680908 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:43.680918 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 14:46:43.680927 | orchestrator | 2025-08-29 14:46:43.680937 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:46:43.680947 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.680958 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.680968 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.680978 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.680987 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.680997 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.681006 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:43.681016 | orchestrator | 2025-08-29 14:46:43.681026 | orchestrator | 2025-08-29 14:46:43.681043 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:46:43.681052 | orchestrator | Friday 29 August 2025 14:46:43 +0000 (0:00:00.552) 0:00:08.007 ********* 2025-08-29 14:46:43.681062 | orchestrator | =============================================================================== 2025-08-29 14:46:43.681072 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.89s 2025-08-29 14:46:43.681081 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-08-29 14:46:43.681096 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s 2025-08-29 14:46:43.681112 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-08-29 14:46:56.003230 | orchestrator | 2025-08-29 14:46:56 | INFO  | Task 18eb7ecc-4171-4130-aeb5-74b33900e72a (frr) was prepared for execution. 2025-08-29 14:46:56.003370 | orchestrator | 2025-08-29 14:46:56 | INFO  | It takes a moment until task 18eb7ecc-4171-4130-aeb5-74b33900e72a (frr) has been started and output is visible here. 
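The facts play above follows a common pattern: ensure a local facts directory exists on every host, then run a full fact-gathering pass. A minimal sketch of that pattern, assuming Ansible's conventional /etc/ansible/facts.d location (the actual paths and parameters used by osism.commons.facts are not shown in this log):

# Hypothetical sketch only: create the local facts directory, then gather facts.
- name: Refresh host facts (illustrative sketch)
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Gather facts about hosts
      ansible.builtin.setup:
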
2025-08-29 14:47:19.595340 | orchestrator | 2025-08-29 14:47:19.595399 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-08-29 14:47:19.595406 | orchestrator | 2025-08-29 14:47:19.595411 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-08-29 14:47:19.595426 | orchestrator | Friday 29 August 2025 14:46:59 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-08-29 14:47:19.595431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:47:19.595436 | orchestrator | 2025-08-29 14:47:19.595440 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-08-29 14:47:19.595444 | orchestrator | Friday 29 August 2025 14:47:00 +0000 (0:00:00.224) 0:00:00.460 ********* 2025-08-29 14:47:19.595449 | orchestrator | changed: [testbed-manager] 2025-08-29 14:47:19.595454 | orchestrator | 2025-08-29 14:47:19.595458 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-08-29 14:47:19.595462 | orchestrator | Friday 29 August 2025 14:47:01 +0000 (0:00:01.119) 0:00:01.580 ********* 2025-08-29 14:47:19.595466 | orchestrator | changed: [testbed-manager] 2025-08-29 14:47:19.595470 | orchestrator | 2025-08-29 14:47:19.595475 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-08-29 14:47:19.595481 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:09.033) 0:00:10.614 ********* 2025-08-29 14:47:19.595485 | orchestrator | ok: [testbed-manager] 2025-08-29 14:47:19.595490 | orchestrator | 2025-08-29 14:47:19.595494 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-08-29 14:47:19.595498 | orchestrator | Friday 29 August 2025 14:47:11 +0000 (0:00:01.067) 0:00:11.681 ********* 2025-08-29 14:47:19.595503 | orchestrator | changed: [testbed-manager] 2025-08-29 14:47:19.595507 | orchestrator | 2025-08-29 14:47:19.595511 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-08-29 14:47:19.595515 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:00.814) 0:00:12.496 ********* 2025-08-29 14:47:19.595519 | orchestrator | ok: [testbed-manager] 2025-08-29 14:47:19.595523 | orchestrator | 2025-08-29 14:47:19.595528 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-08-29 14:47:19.595532 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:01.056) 0:00:13.552 ********* 2025-08-29 14:47:19.595536 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:47:19.595540 | orchestrator | 2025-08-29 14:47:19.595545 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-08-29 14:47:19.595549 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.741) 0:00:14.294 ********* 2025-08-29 14:47:19.595553 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:47:19.595557 | orchestrator | 2025-08-29 14:47:19.595561 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-08-29 14:47:19.595566 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.135) 0:00:14.430 ********* 2025-08-29 14:47:19.595579 | orchestrator | changed: [testbed-manager] 2025-08-29 14:47:19.595583 | orchestrator 
| 2025-08-29 14:47:19.595588 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-08-29 14:47:19.595592 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.869) 0:00:15.299 ********* 2025-08-29 14:47:19.595596 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-08-29 14:47:19.595600 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-08-29 14:47:19.595605 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-08-29 14:47:19.595609 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-08-29 14:47:19.595613 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-08-29 14:47:19.595617 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-08-29 14:47:19.595621 | orchestrator | 2025-08-29 14:47:19.595626 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-08-29 14:47:19.595630 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:01.960) 0:00:17.260 ********* 2025-08-29 14:47:19.595634 | orchestrator | ok: [testbed-manager] 2025-08-29 14:47:19.595638 | orchestrator | 2025-08-29 14:47:19.595642 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-08-29 14:47:19.595646 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:01.184) 0:00:18.444 ********* 2025-08-29 14:47:19.595650 | orchestrator | changed: [testbed-manager] 2025-08-29 14:47:19.595654 | orchestrator | 2025-08-29 14:47:19.595658 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:47:19.595663 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:47:19.595667 | orchestrator | 2025-08-29 14:47:19.595671 | orchestrator | 2025-08-29 14:47:19.595675 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:47:19.595679 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:01.288) 0:00:19.732 ********* 2025-08-29 14:47:19.595683 | orchestrator | =============================================================================== 2025-08-29 14:47:19.595687 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.03s 2025-08-29 14:47:19.595691 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.96s 2025-08-29 14:47:19.595695 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.29s 2025-08-29 14:47:19.595700 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.18s 2025-08-29 14:47:19.595711 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.12s 2025-08-29 14:47:19.595716 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.07s 2025-08-29 14:47:19.595720 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.06s 2025-08-29 14:47:19.595724 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.87s 2025-08-29 
14:47:19.595728 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.81s 2025-08-29 14:47:19.595732 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.74s 2025-08-29 14:47:19.595736 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-08-29 14:47:19.595741 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.14s 2025-08-29 14:47:19.777372 | orchestrator | 2025-08-29 14:47:19.780662 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 14:47:19 UTC 2025 2025-08-29 14:47:19.780686 | orchestrator | 2025-08-29 14:47:21.370401 | orchestrator | 2025-08-29 14:47:21 | INFO  | Collection nutshell is prepared for execution 2025-08-29 14:47:21.370511 | orchestrator | 2025-08-29 14:47:21 | INFO  | D [0] - dotfiles 2025-08-29 14:47:31.441161 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [0] - homer 2025-08-29 14:47:31.441355 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [0] - netdata 2025-08-29 14:47:31.441374 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [0] - openstackclient 2025-08-29 14:47:31.441387 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [0] - phpmyadmin 2025-08-29 14:47:31.441399 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [0] - common 2025-08-29 14:47:31.443762 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [1] -- loadbalancer 2025-08-29 14:47:31.443803 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [2] --- opensearch 2025-08-29 14:47:31.443916 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [2] --- mariadb-ng 2025-08-29 14:47:31.444344 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [3] ---- horizon 2025-08-29 14:47:31.444365 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [3] ---- keystone 2025-08-29 14:47:31.444500 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [4] ----- neutron 2025-08-29 14:47:31.445023 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [5] ------ wait-for-nova 2025-08-29 14:47:31.445042 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [5] ------ octavia 2025-08-29 14:47:31.447937 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [4] ----- barbican 2025-08-29 14:47:31.448044 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [4] ----- designate 2025-08-29 14:47:31.448060 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [4] ----- ironic 2025-08-29 14:47:31.448898 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [4] ----- placement 2025-08-29 14:47:31.448933 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [4] ----- magnum 2025-08-29 14:47:31.448946 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [1] -- openvswitch 2025-08-29 14:47:31.448957 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [2] --- ovn 2025-08-29 14:47:31.449308 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [1] -- memcached 2025-08-29 14:47:31.449345 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [1] -- redis 2025-08-29 14:47:31.449357 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [1] -- rabbitmq-ng 2025-08-29 14:47:31.449557 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [0] - kubernetes 2025-08-29 14:47:31.452844 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [1] -- kubeconfig 2025-08-29 14:47:31.453138 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [1] -- copy-kubeconfig 2025-08-29 14:47:31.453164 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [0] - ceph 2025-08-29 14:47:31.458216 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [1] -- ceph-pools 2025-08-29 
14:47:31.458252 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [2] --- copy-ceph-keys 2025-08-29 14:47:31.458264 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [3] ---- cephclient 2025-08-29 14:47:31.458276 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-08-29 14:47:31.458615 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [4] ----- wait-for-keystone 2025-08-29 14:47:31.458762 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [5] ------ kolla-ceph-rgw 2025-08-29 14:47:31.458776 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [5] ------ glance 2025-08-29 14:47:31.459011 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [5] ------ cinder 2025-08-29 14:47:31.459206 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [5] ------ nova 2025-08-29 14:47:31.459732 | orchestrator | 2025-08-29 14:47:31 | INFO  | A [4] ----- prometheus 2025-08-29 14:47:31.459893 | orchestrator | 2025-08-29 14:47:31 | INFO  | D [5] ------ grafana 2025-08-29 14:47:31.660798 | orchestrator | 2025-08-29 14:47:31 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-08-29 14:47:31.660903 | orchestrator | 2025-08-29 14:47:31 | INFO  | Tasks are running in the background 2025-08-29 14:47:34.894998 | orchestrator | 2025-08-29 14:47:34 | INFO  | No task IDs specified, wait for all currently running tasks 2025-08-29 14:47:37.014168 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:37.014542 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:37.016430 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:37.017036 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:37.018940 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:37.019643 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:37.020379 | orchestrator | 2025-08-29 14:47:37 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:37.020627 | orchestrator | 2025-08-29 14:47:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:40.072882 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:40.072969 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:40.073142 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:40.073906 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:40.074357 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:40.074945 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:40.075426 | orchestrator | 2025-08-29 14:47:40 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:40.075520 | orchestrator | 2025-08-29 14:47:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:43.100787 | orchestrator | 2025-08-29 14:47:43 | INFO  
| Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:43.100950 | orchestrator | 2025-08-29 14:47:43 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:43.101478 | orchestrator | 2025-08-29 14:47:43 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:43.102092 | orchestrator | 2025-08-29 14:47:43 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:43.102582 | orchestrator | 2025-08-29 14:47:43 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:43.103133 | orchestrator | 2025-08-29 14:47:43 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:43.104750 | orchestrator | 2025-08-29 14:47:43 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:43.104776 | orchestrator | 2025-08-29 14:47:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:46.232026 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:46.232182 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:46.232198 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:46.232210 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:46.232428 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:46.232440 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:46.232451 | orchestrator | 2025-08-29 14:47:46 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:46.232462 | orchestrator | 2025-08-29 14:47:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:49.318642 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:49.318702 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:49.318710 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:49.318716 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:49.318723 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:49.318729 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:49.318735 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:49.318742 | orchestrator | 2025-08-29 14:47:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:52.607883 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:52.609096 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:52.610605 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:52.615333 
| orchestrator | 2025-08-29 14:47:52 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:52.616005 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:52.617201 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:52.619203 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:52.619219 | orchestrator | 2025-08-29 14:47:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:55.709457 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:55.715905 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state STARTED 2025-08-29 14:47:55.721319 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:55.727798 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:55.732338 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:55.737109 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:55.737587 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:55.737597 | orchestrator | 2025-08-29 14:47:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:58.933039 | orchestrator | 2025-08-29 14:47:58.933095 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-08-29 14:47:58.933103 | orchestrator | 2025-08-29 14:47:58.933110 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-08-29 14:47:58.933117 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:01.015) 0:00:01.015 ********* 2025-08-29 14:47:58.933123 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:47:58.933130 | orchestrator | changed: [testbed-manager] 2025-08-29 14:47:58.933137 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:47:58.933143 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:47:58.933149 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:47:58.933155 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:47:58.933161 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:47:58.933167 | orchestrator | 2025-08-29 14:47:58.933174 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-08-29 14:47:58.933180 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:03.830) 0:00:04.845 ********* 2025-08-29 14:47:58.933186 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 14:47:58.933193 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 14:47:58.933199 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 14:47:58.933206 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 14:47:58.933212 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 14:47:58.933218 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 14:47:58.933224 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 14:47:58.933231 | orchestrator | 2025-08-29 14:47:58.933237 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-08-29 14:47:58.933244 | orchestrator | Friday 29 August 2025 14:47:49 +0000 (0:00:01.275) 0:00:06.121 ********* 2025-08-29 14:47:58.933252 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.273997', 'end': '2025-08-29 14:47:49.282686', 'delta': '0:00:00.008689', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933271 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.376262', 'end': '2025-08-29 14:47:49.386233', 'delta': '0:00:00.009971', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933319 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.387278', 'end': '2025-08-29 14:47:49.397794', 'delta': '0:00:00.010516', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933352 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.306527', 'end': '2025-08-29 14:47:49.316816', 'delta': '0:00:00.010289', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933365 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.614265', 'end': '2025-08-29 14:47:49.623268', 'delta': '0:00:00.009003', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933534 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.379876', 'end': '2025-08-29 14:47:49.387437', 'delta': '0:00:00.007561', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933542 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:49.663279', 'end': '2025-08-29 14:47:49.673499', 'delta': '0:00:00.010220', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:47:58.933558 | orchestrator | 2025-08-29 
14:47:58.933565 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-08-29 14:47:58.933571 | orchestrator | Friday 29 August 2025 14:47:51 +0000 (0:00:01.456) 0:00:07.578 ********* 2025-08-29 14:47:58.933578 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 14:47:58.933584 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 14:47:58.933590 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 14:47:58.933596 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 14:47:58.933602 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 14:47:58.933608 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 14:47:58.933615 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 14:47:58.933621 | orchestrator | 2025-08-29 14:47:58.933627 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-08-29 14:47:58.933633 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:01.393) 0:00:08.971 ********* 2025-08-29 14:47:58.933639 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-08-29 14:47:58.933645 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 14:47:58.933651 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 14:47:58.933657 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 14:47:58.933666 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 14:47:58.933672 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 14:47:58.933678 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 14:47:58.933685 | orchestrator | 2025-08-29 14:47:58.933691 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:47:58.933703 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933710 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933716 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933722 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933729 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933735 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933741 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:47:58.933747 | orchestrator | 2025-08-29 14:47:58.933753 | orchestrator | 2025-08-29 14:47:58.933759 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:47:58.933766 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:03.204) 0:00:12.176 ********* 2025-08-29 14:47:58.933772 | orchestrator | =============================================================================== 2025-08-29 14:47:58.933778 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.83s 2025-08-29 14:47:58.933784 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 3.20s 2025-08-29 14:47:58.933790 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.46s 2025-08-29 14:47:58.933800 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.39s 2025-08-29 14:47:58.933806 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.28s 2025-08-29 14:47:58.933812 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:47:58.933818 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task be3ca5ef-6c65-4a88-bd15-a6260146e63a is in state SUCCESS 2025-08-29 14:47:58.933825 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:47:58.933831 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:47:58.933837 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:47:58.933843 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:47:58.933849 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:47:58.933856 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:47:58.933862 | orchestrator | 2025-08-29 14:47:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:02.012546 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:02.012602 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:02.012615 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:02.012627 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:02.012638 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:02.012648 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:02.012659 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:02.012670 | orchestrator | 2025-08-29 14:48:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:05.043124 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:05.043201 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:05.043684 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:05.044235 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:05.044703 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:05.045617 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:05.045834 | orchestrator | 2025-08-29 
14:48:05 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:05.045851 | orchestrator | 2025-08-29 14:48:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:08.233931 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:08.234437 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:08.236907 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:08.236934 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:08.236943 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:08.238078 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:08.239292 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:08.239404 | orchestrator | 2025-08-29 14:48:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:11.574246 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:11.575376 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:11.576422 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:11.576803 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:11.578112 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:11.579437 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:11.581657 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:11.581683 | orchestrator | 2025-08-29 14:48:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:14.877721 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:14.877809 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:14.877824 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:14.877836 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:14.877848 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:14.877859 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:14.877870 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:14.877881 | orchestrator | 2025-08-29 14:48:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:17.931896 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:17.942795 | 
orchestrator | 2025-08-29 14:48:17 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:18.105227 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:18.105352 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:18.105380 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:18.105388 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:18.105415 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:18.105424 | orchestrator | 2025-08-29 14:48:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:21.013753 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state STARTED 2025-08-29 14:48:21.013843 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:21.013866 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:21.013885 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:21.013903 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:21.013914 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:21.013925 | orchestrator | 2025-08-29 14:48:21 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:21.013936 | orchestrator | 2025-08-29 14:48:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:24.132198 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task d75095f1-4e6a-4cfe-a745-3ff4a4bc737d is in state SUCCESS 2025-08-29 14:48:24.135285 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:24.135358 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:24.135372 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:24.135661 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:24.135683 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:24.136235 | orchestrator | 2025-08-29 14:48:24 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:24.136265 | orchestrator | 2025-08-29 14:48:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:27.217116 | orchestrator | 2025-08-29 14:48:27 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:27.217655 | orchestrator | 2025-08-29 14:48:27 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:27.218387 | orchestrator | 2025-08-29 14:48:27 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:27.219589 | orchestrator | 2025-08-29 14:48:27 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is 
in state STARTED 2025-08-29 14:48:27.221552 | orchestrator | 2025-08-29 14:48:27 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:27.221933 | orchestrator | 2025-08-29 14:48:27 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:27.221964 | orchestrator | 2025-08-29 14:48:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:30.257798 | orchestrator | 2025-08-29 14:48:30 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:30.257885 | orchestrator | 2025-08-29 14:48:30 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:30.258332 | orchestrator | 2025-08-29 14:48:30 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:30.259821 | orchestrator | 2025-08-29 14:48:30 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state STARTED 2025-08-29 14:48:30.261521 | orchestrator | 2025-08-29 14:48:30 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:30.263250 | orchestrator | 2025-08-29 14:48:30 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:30.263270 | orchestrator | 2025-08-29 14:48:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:33.297425 | orchestrator | 2025-08-29 14:48:33 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:33.297513 | orchestrator | 2025-08-29 14:48:33 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:33.297521 | orchestrator | 2025-08-29 14:48:33 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:33.297527 | orchestrator | 2025-08-29 14:48:33 | INFO  | Task 55220225-691a-4913-aee4-fe9d70cb04c2 is in state SUCCESS 2025-08-29 14:48:33.297533 | orchestrator | 2025-08-29 14:48:33 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:33.297539 | orchestrator | 2025-08-29 14:48:33 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:33.297545 | orchestrator | 2025-08-29 14:48:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:36.326940 | orchestrator | 2025-08-29 14:48:36 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:36.328264 | orchestrator | 2025-08-29 14:48:36 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:36.328706 | orchestrator | 2025-08-29 14:48:36 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:36.330598 | orchestrator | 2025-08-29 14:48:36 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:36.330993 | orchestrator | 2025-08-29 14:48:36 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:36.331411 | orchestrator | 2025-08-29 14:48:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:39.391876 | orchestrator | 2025-08-29 14:48:39 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:39.392126 | orchestrator | 2025-08-29 14:48:39 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:39.393196 | orchestrator | 2025-08-29 14:48:39 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:39.394589 | orchestrator | 2025-08-29 14:48:39 | INFO  | Task 
4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:39.397643 | orchestrator | 2025-08-29 14:48:39 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:39.397693 | orchestrator | 2025-08-29 14:48:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:42.429655 | orchestrator | 2025-08-29 14:48:42 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state STARTED 2025-08-29 14:48:42.430214 | orchestrator | 2025-08-29 14:48:42 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:42.431075 | orchestrator | 2025-08-29 14:48:42 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:42.431995 | orchestrator | 2025-08-29 14:48:42 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:42.432922 | orchestrator | 2025-08-29 14:48:42 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:42.432959 | orchestrator | 2025-08-29 14:48:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:45.488997 | orchestrator | 2025-08-29 14:48:45 | INFO  | Task 951429d2-af80-469a-a56b-b171f5e5d99e is in state SUCCESS 2025-08-29 14:48:45.490289 | orchestrator | 2025-08-29 14:48:45.490375 | orchestrator | 2025-08-29 14:48:45.490390 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-08-29 14:48:45.490403 | orchestrator | 2025-08-29 14:48:45.490414 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-08-29 14:48:45.490426 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.608) 0:00:00.608 ********* 2025-08-29 14:48:45.490438 | orchestrator | ok: [testbed-manager] => { 2025-08-29 14:48:45.490450 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
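The recurring "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records above reflect a simple poll loop: query each task's state, report it, and sleep until every task reaches a terminal state. A minimal sketch of that pattern in Python follows; get_task_state is a hypothetical placeholder, not the actual osism client.

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def get_task_state(task_id: str) -> str:
        # Hypothetical placeholder -- a real client would query the task backend here.
        return "SUCCESS"

    def wait_for_tasks(task_ids, check_interval: float = 1.0) -> None:
        # Poll every task until it reaches a terminal state, in the style of the records above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(check_interval)} second(s) until the next check")
                time.sleep(check_interval)

    # Example call, with task IDs taken from the records above:
    wait_for_tasks(["d75095f1-4e6a-4cfe-a745-3ff4a4bc737d",
                    "373488d3-a056-4d1d-8365-602eeeb6bacd"])
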
2025-08-29 14:48:45.490463 | orchestrator | } 2025-08-29 14:48:45.490475 | orchestrator | 2025-08-29 14:48:45.490485 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-08-29 14:48:45.490497 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.488) 0:00:01.097 ********* 2025-08-29 14:48:45.490508 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.490520 | orchestrator | 2025-08-29 14:48:45.490530 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-08-29 14:48:45.490541 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:02.441) 0:00:03.538 ********* 2025-08-29 14:48:45.490556 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-08-29 14:48:45.490575 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-08-29 14:48:45.490594 | orchestrator | 2025-08-29 14:48:45.490614 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-08-29 14:48:45.490634 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.030) 0:00:04.569 ********* 2025-08-29 14:48:45.490656 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.490669 | orchestrator | 2025-08-29 14:48:45.490680 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-08-29 14:48:45.490715 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:01.603) 0:00:06.175 ********* 2025-08-29 14:48:45.490727 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.490737 | orchestrator | 2025-08-29 14:48:45.490748 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-08-29 14:48:45.490759 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:03.385) 0:00:09.560 ********* 2025-08-29 14:48:45.490770 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
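The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)." line comes from Ansible's retries/until mechanism: the task is re-run after a delay until its condition holds or the retry budget is exhausted, and here it succeeds on the next attempt (the "ok:" line that follows). The same idea as a small Python sketch; the check callable and service label are illustrative, not the actual role implementation.

    import time

    def retry_until(check, retries: int = 10, delay: float = 5.0, label: str = "task") -> bool:
        # Re-run `check` until it succeeds or the retry budget is exhausted,
        # printing progress in the same style as the log line above.
        for attempt in range(retries + 1):
            if check():
                print(f"ok: {label}")
                return True
            remaining = retries - attempt
            if remaining:
                print(f"FAILED - RETRYING: {label} ({remaining} retries left).")
                time.sleep(delay)
        print(f"fatal: {label} still failing after {retries} retries")
        return False

    # Example: fail once, then succeed -- the same single retry seen above.
    attempts = iter([False, True])
    retry_until(lambda: next(attempts), delay=0.1, label="Manage homer service")
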
2025-08-29 14:48:45.490781 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.490791 | orchestrator | 2025-08-29 14:48:45.490802 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-08-29 14:48:45.490813 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:26.745) 0:00:36.306 ********* 2025-08-29 14:48:45.490823 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.490834 | orchestrator | 2025-08-29 14:48:45.490858 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:48:45.490880 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.490893 | orchestrator | 2025-08-29 14:48:45.490903 | orchestrator | 2025-08-29 14:48:45.490914 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:48:45.490926 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:04.003) 0:00:40.309 ********* 2025-08-29 14:48:45.490937 | orchestrator | =============================================================================== 2025-08-29 14:48:45.490947 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.75s 2025-08-29 14:48:45.490958 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.00s 2025-08-29 14:48:45.490995 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.39s 2025-08-29 14:48:45.491006 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.44s 2025-08-29 14:48:45.491017 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.61s 2025-08-29 14:48:45.491027 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.03s 2025-08-29 14:48:45.491038 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.49s 2025-08-29 14:48:45.491048 | orchestrator | 2025-08-29 14:48:45.491059 | orchestrator | 2025-08-29 14:48:45.491069 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-08-29 14:48:45.491080 | orchestrator | 2025-08-29 14:48:45.491091 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-08-29 14:48:45.491102 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.683) 0:00:00.683 ********* 2025-08-29 14:48:45.491113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-08-29 14:48:45.491125 | orchestrator | 2025-08-29 14:48:45.491136 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-08-29 14:48:45.491146 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.369) 0:00:01.052 ********* 2025-08-29 14:48:45.491157 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-08-29 14:48:45.491167 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-08-29 14:48:45.491178 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-08-29 14:48:45.491189 | orchestrator | 2025-08-29 14:48:45.491200 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-08-29 
14:48:45.491211 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:02.235) 0:00:03.288 ********* 2025-08-29 14:48:45.491222 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.491232 | orchestrator | 2025-08-29 14:48:45.491243 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-08-29 14:48:45.491254 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.306) 0:00:04.595 ********* 2025-08-29 14:48:45.491279 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-08-29 14:48:45.491291 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.491335 | orchestrator | 2025-08-29 14:48:45.491348 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-08-29 14:48:45.491358 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:33.379) 0:00:37.974 ********* 2025-08-29 14:48:45.491369 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.491380 | orchestrator | 2025-08-29 14:48:45.491438 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-08-29 14:48:45.491452 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:02.272) 0:00:40.246 ********* 2025-08-29 14:48:45.491463 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.491474 | orchestrator | 2025-08-29 14:48:45.491485 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-08-29 14:48:45.491495 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.717) 0:00:40.964 ********* 2025-08-29 14:48:45.491506 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.491517 | orchestrator | 2025-08-29 14:48:45.491528 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-08-29 14:48:45.491538 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:02.680) 0:00:43.644 ********* 2025-08-29 14:48:45.491549 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.491560 | orchestrator | 2025-08-29 14:48:45.491570 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-08-29 14:48:45.491581 | orchestrator | Friday 29 August 2025 14:48:29 +0000 (0:00:02.549) 0:00:46.194 ********* 2025-08-29 14:48:45.491591 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.491612 | orchestrator | 2025-08-29 14:48:45.491623 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-08-29 14:48:45.491634 | orchestrator | Friday 29 August 2025 14:48:30 +0000 (0:00:01.307) 0:00:47.501 ********* 2025-08-29 14:48:45.491645 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.491656 | orchestrator | 2025-08-29 14:48:45.491667 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:48:45.491678 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.491688 | orchestrator | 2025-08-29 14:48:45.491699 | orchestrator | 2025-08-29 14:48:45.491748 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:48:45.491760 | orchestrator | Friday 29 August 2025 14:48:31 +0000 (0:00:01.143) 0:00:48.645 ********* 2025-08-29 14:48:45.491771 | orchestrator | 
=============================================================================== 2025-08-29 14:48:45.491781 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.38s 2025-08-29 14:48:45.491792 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.68s 2025-08-29 14:48:45.491803 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.55s 2025-08-29 14:48:45.491814 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.27s 2025-08-29 14:48:45.491825 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.24s 2025-08-29 14:48:45.491835 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.31s 2025-08-29 14:48:45.491846 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.31s 2025-08-29 14:48:45.491856 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.14s 2025-08-29 14:48:45.491867 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.72s 2025-08-29 14:48:45.491877 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s 2025-08-29 14:48:45.491888 | orchestrator | 2025-08-29 14:48:45.491898 | orchestrator | 2025-08-29 14:48:45.491909 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:48:45.491919 | orchestrator | 2025-08-29 14:48:45.491930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:48:45.491941 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.812) 0:00:00.812 ********* 2025-08-29 14:48:45.491951 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-08-29 14:48:45.491962 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-08-29 14:48:45.491972 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-08-29 14:48:45.491983 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-08-29 14:48:45.491994 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-08-29 14:48:45.492004 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-08-29 14:48:45.492015 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-08-29 14:48:45.492025 | orchestrator | 2025-08-29 14:48:45.492036 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-08-29 14:48:45.492046 | orchestrator | 2025-08-29 14:48:45.492057 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-08-29 14:48:45.492067 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:01.504) 0:00:02.316 ********* 2025-08-29 14:48:45.492093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:48:45.492106 | orchestrator | 2025-08-29 14:48:45.492117 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-08-29 14:48:45.492128 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.408) 0:00:03.725 ********* 2025-08-29 
14:48:45.492145 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.492156 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:45.492167 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:45.492178 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:45.492188 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:45.492208 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:45.492219 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:45.492230 | orchestrator | 2025-08-29 14:48:45.492241 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-08-29 14:48:45.492251 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:01.686) 0:00:05.412 ********* 2025-08-29 14:48:45.492262 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:45.492273 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:45.492283 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:45.492294 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:45.492331 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.492343 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:45.492353 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:45.492364 | orchestrator | 2025-08-29 14:48:45.492375 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-08-29 14:48:45.492385 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:03.999) 0:00:09.411 ********* 2025-08-29 14:48:45.492396 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:48:45.492407 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:48:45.492418 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.492428 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:48:45.492439 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:48:45.492450 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:48:45.492460 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:48:45.492471 | orchestrator | 2025-08-29 14:48:45.492481 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-08-29 14:48:45.492492 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:02.065) 0:00:11.477 ********* 2025-08-29 14:48:45.492503 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:48:45.492514 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:48:45.492524 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:48:45.492535 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:48:45.492545 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:48:45.492562 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.492573 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:48:45.492583 | orchestrator | 2025-08-29 14:48:45.492594 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-08-29 14:48:45.492605 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:10.718) 0:00:22.196 ********* 2025-08-29 14:48:45.492615 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:48:45.492626 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:48:45.492637 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:48:45.492647 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:48:45.492658 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:48:45.492669 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:48:45.492679 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.492690 | 
orchestrator | 2025-08-29 14:48:45.492700 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-08-29 14:48:45.492711 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:20.418) 0:00:42.614 ********* 2025-08-29 14:48:45.492722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:48:45.492736 | orchestrator | 2025-08-29 14:48:45.492746 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-08-29 14:48:45.492757 | orchestrator | Friday 29 August 2025 14:48:28 +0000 (0:00:02.090) 0:00:44.705 ********* 2025-08-29 14:48:45.492768 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-08-29 14:48:45.492786 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-08-29 14:48:45.492797 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-08-29 14:48:45.492808 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-08-29 14:48:45.492818 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-08-29 14:48:45.492829 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-08-29 14:48:45.492839 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-08-29 14:48:45.492850 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-08-29 14:48:45.492860 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-08-29 14:48:45.492871 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-08-29 14:48:45.492881 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-08-29 14:48:45.492892 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-08-29 14:48:45.492902 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-08-29 14:48:45.492913 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-08-29 14:48:45.492923 | orchestrator | 2025-08-29 14:48:45.492934 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-08-29 14:48:45.492945 | orchestrator | Friday 29 August 2025 14:48:33 +0000 (0:00:05.627) 0:00:50.333 ********* 2025-08-29 14:48:45.492956 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.492967 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:45.492978 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:45.492988 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:45.492999 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:45.493010 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:45.493020 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:45.493031 | orchestrator | 2025-08-29 14:48:45.493041 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-08-29 14:48:45.493052 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:01.242) 0:00:51.576 ********* 2025-08-29 14:48:45.493063 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.493073 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:48:45.493084 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:48:45.493094 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:48:45.493104 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:48:45.493115 | orchestrator | 
changed: [testbed-node-4] 2025-08-29 14:48:45.493126 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:48:45.493136 | orchestrator | 2025-08-29 14:48:45.493147 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-08-29 14:48:45.493166 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:01.159) 0:00:52.735 ********* 2025-08-29 14:48:45.493177 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.493188 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:45.493198 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:45.493209 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:45.493220 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:45.493230 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:45.493241 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:45.493251 | orchestrator | 2025-08-29 14:48:45.493262 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-08-29 14:48:45.493273 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:01.222) 0:00:53.957 ********* 2025-08-29 14:48:45.493283 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:45.493294 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:45.493356 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:45.493369 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:45.493379 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:45.493390 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:45.493400 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:45.493411 | orchestrator | 2025-08-29 14:48:45.493422 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-08-29 14:48:45.493440 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:01.554) 0:00:55.512 ********* 2025-08-29 14:48:45.493451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-08-29 14:48:45.493463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:48:45.493475 | orchestrator | 2025-08-29 14:48:45.493485 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-08-29 14:48:45.493502 | orchestrator | Friday 29 August 2025 14:48:39 +0000 (0:00:01.115) 0:00:56.627 ********* 2025-08-29 14:48:45.493513 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.493523 | orchestrator | 2025-08-29 14:48:45.493534 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-08-29 14:48:45.493545 | orchestrator | Friday 29 August 2025 14:48:41 +0000 (0:00:01.665) 0:00:58.293 ********* 2025-08-29 14:48:45.493555 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:48:45.493566 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:45.493577 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:48:45.493587 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:48:45.493598 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:48:45.493608 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:48:45.493619 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:48:45.493630 | orchestrator | 2025-08-29 14:48:45.493640 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 14:48:45.493689 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493702 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493713 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493724 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493735 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493746 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493757 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:48:45.493767 | orchestrator | 2025-08-29 14:48:45.493778 | orchestrator | 2025-08-29 14:48:45.493789 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:48:45.493799 | orchestrator | Friday 29 August 2025 14:48:44 +0000 (0:00:02.888) 0:01:01.181 ********* 2025-08-29 14:48:45.493810 | orchestrator | =============================================================================== 2025-08-29 14:48:45.493821 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 20.42s 2025-08-29 14:48:45.493832 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.72s 2025-08-29 14:48:45.493843 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.63s 2025-08-29 14:48:45.493853 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.00s 2025-08-29 14:48:45.493864 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.89s 2025-08-29 14:48:45.493875 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.09s 2025-08-29 14:48:45.493884 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.07s 2025-08-29 14:48:45.493900 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.69s 2025-08-29 14:48:45.493909 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.67s 2025-08-29 14:48:45.493919 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.55s 2025-08-29 14:48:45.493928 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.50s 2025-08-29 14:48:45.493944 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.41s 2025-08-29 14:48:45.493954 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.24s 2025-08-29 14:48:45.493964 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.22s 2025-08-29 14:48:45.493974 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.16s 2025-08-29 14:48:45.493983 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.12s 2025-08-29 14:48:45.493993 | orchestrator | 2025-08-29 14:48:45 | INFO  | Task 
8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:45.494003 | orchestrator | 2025-08-29 14:48:45 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:45.494529 | orchestrator | 2025-08-29 14:48:45 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:45.496632 | orchestrator | 2025-08-29 14:48:45 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:45.497569 | orchestrator | 2025-08-29 14:48:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:48.540745 | orchestrator | 2025-08-29 14:48:48 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:48.541834 | orchestrator | 2025-08-29 14:48:48 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:48.542944 | orchestrator | 2025-08-29 14:48:48 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:48.544349 | orchestrator | 2025-08-29 14:48:48 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:48.544376 | orchestrator | 2025-08-29 14:48:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:51.584423 | orchestrator | 2025-08-29 14:48:51 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:51.586658 | orchestrator | 2025-08-29 14:48:51 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:51.587959 | orchestrator | 2025-08-29 14:48:51 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:51.591489 | orchestrator | 2025-08-29 14:48:51 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:51.591574 | orchestrator | 2025-08-29 14:48:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:54.631521 | orchestrator | 2025-08-29 14:48:54 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:54.632052 | orchestrator | 2025-08-29 14:48:54 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:54.634453 | orchestrator | 2025-08-29 14:48:54 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:54.636121 | orchestrator | 2025-08-29 14:48:54 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:54.636140 | orchestrator | 2025-08-29 14:48:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:57.692779 | orchestrator | 2025-08-29 14:48:57 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:48:57.695813 | orchestrator | 2025-08-29 14:48:57 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:48:57.697124 | orchestrator | 2025-08-29 14:48:57 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:48:57.699496 | orchestrator | 2025-08-29 14:48:57 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:48:57.699553 | orchestrator | 2025-08-29 14:48:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:00.751014 | orchestrator | 2025-08-29 14:49:00 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:00.755190 | orchestrator | 2025-08-29 14:49:00 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:00.759537 | orchestrator | 2025-08-29 14:49:00 | INFO  | Task 
4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state STARTED 2025-08-29 14:49:00.763145 | orchestrator | 2025-08-29 14:49:00 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:00.765133 | orchestrator | 2025-08-29 14:49:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:03.816686 | orchestrator | 2025-08-29 14:49:03 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:03.817740 | orchestrator | 2025-08-29 14:49:03 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:03.818110 | orchestrator | 2025-08-29 14:49:03 | INFO  | Task 4c1a4526-a2a5-475b-869f-adad74ce7dc8 is in state SUCCESS 2025-08-29 14:49:03.819487 | orchestrator | 2025-08-29 14:49:03 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:03.819615 | orchestrator | 2025-08-29 14:49:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:06.854599 | orchestrator | 2025-08-29 14:49:06 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:06.854713 | orchestrator | 2025-08-29 14:49:06 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:06.854729 | orchestrator | 2025-08-29 14:49:06 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:06.854741 | orchestrator | 2025-08-29 14:49:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:09.901845 | orchestrator | 2025-08-29 14:49:09 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:09.902750 | orchestrator | 2025-08-29 14:49:09 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:09.905995 | orchestrator | 2025-08-29 14:49:09 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:09.906626 | orchestrator | 2025-08-29 14:49:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:12.949793 | orchestrator | 2025-08-29 14:49:12 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:12.950783 | orchestrator | 2025-08-29 14:49:12 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:12.952153 | orchestrator | 2025-08-29 14:49:12 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:12.952195 | orchestrator | 2025-08-29 14:49:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:16.001924 | orchestrator | 2025-08-29 14:49:16 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:16.004497 | orchestrator | 2025-08-29 14:49:16 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:16.007091 | orchestrator | 2025-08-29 14:49:16 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:16.007156 | orchestrator | 2025-08-29 14:49:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:19.056897 | orchestrator | 2025-08-29 14:49:19 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:19.058809 | orchestrator | 2025-08-29 14:49:19 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:19.059599 | orchestrator | 2025-08-29 14:49:19 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:19.059646 | orchestrator | 2025-08-29 14:49:19 | INFO  | Wait 1 second(s) until the next 
check 2025-08-29 14:49:22.103075 | orchestrator | 2025-08-29 14:49:22 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:22.103195 | orchestrator | 2025-08-29 14:49:22 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:22.104015 | orchestrator | 2025-08-29 14:49:22 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:22.104058 | orchestrator | 2025-08-29 14:49:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:25.146833 | orchestrator | 2025-08-29 14:49:25 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:25.150165 | orchestrator | 2025-08-29 14:49:25 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:25.159981 | orchestrator | 2025-08-29 14:49:25 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:25.160043 | orchestrator | 2025-08-29 14:49:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:28.198919 | orchestrator | 2025-08-29 14:49:28 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:28.200765 | orchestrator | 2025-08-29 14:49:28 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:28.201821 | orchestrator | 2025-08-29 14:49:28 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:28.202121 | orchestrator | 2025-08-29 14:49:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:31.236163 | orchestrator | 2025-08-29 14:49:31 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:31.237189 | orchestrator | 2025-08-29 14:49:31 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:31.237824 | orchestrator | 2025-08-29 14:49:31 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:31.237921 | orchestrator | 2025-08-29 14:49:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:34.281393 | orchestrator | 2025-08-29 14:49:34 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:34.281931 | orchestrator | 2025-08-29 14:49:34 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:34.284401 | orchestrator | 2025-08-29 14:49:34 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:34.284681 | orchestrator | 2025-08-29 14:49:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:37.327258 | orchestrator | 2025-08-29 14:49:37 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:37.329234 | orchestrator | 2025-08-29 14:49:37 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:37.330824 | orchestrator | 2025-08-29 14:49:37 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:37.331587 | orchestrator | 2025-08-29 14:49:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:40.387038 | orchestrator | 2025-08-29 14:49:40 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:40.390072 | orchestrator | 2025-08-29 14:49:40 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:40.392828 | orchestrator | 2025-08-29 14:49:40 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 
14:49:40.393545 | orchestrator | 2025-08-29 14:49:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:43.434406 | orchestrator | 2025-08-29 14:49:43 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:43.435286 | orchestrator | 2025-08-29 14:49:43 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:43.437062 | orchestrator | 2025-08-29 14:49:43 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:43.437094 | orchestrator | 2025-08-29 14:49:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:46.484025 | orchestrator | 2025-08-29 14:49:46 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:46.486267 | orchestrator | 2025-08-29 14:49:46 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:46.488059 | orchestrator | 2025-08-29 14:49:46 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:46.488097 | orchestrator | 2025-08-29 14:49:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:49.540582 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:49.543000 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:49.544919 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:49.545566 | orchestrator | 2025-08-29 14:49:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:52.588231 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:52.589433 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:52.590294 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:52.590374 | orchestrator | 2025-08-29 14:49:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:55.629823 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:55.630504 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:55.632143 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:55.632187 | orchestrator | 2025-08-29 14:49:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:58.681536 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:49:58.683406 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:49:58.684751 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:49:58.684782 | orchestrator | 2025-08-29 14:49:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:01.737521 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:01.738543 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:01.741687 | orchestrator | 2025-08-29 
14:50:01 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:01.741732 | orchestrator | 2025-08-29 14:50:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:04.792020 | orchestrator | 2025-08-29 14:50:04 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:04.794511 | orchestrator | 2025-08-29 14:50:04 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:04.796627 | orchestrator | 2025-08-29 14:50:04 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:04.796978 | orchestrator | 2025-08-29 14:50:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:07.845488 | orchestrator | 2025-08-29 14:50:07 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:07.846877 | orchestrator | 2025-08-29 14:50:07 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:07.849180 | orchestrator | 2025-08-29 14:50:07 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:07.849234 | orchestrator | 2025-08-29 14:50:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:10.891139 | orchestrator | 2025-08-29 14:50:10 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:10.892998 | orchestrator | 2025-08-29 14:50:10 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:10.894767 | orchestrator | 2025-08-29 14:50:10 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:10.894816 | orchestrator | 2025-08-29 14:50:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:13.963365 | orchestrator | 2025-08-29 14:50:13 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:13.965903 | orchestrator | 2025-08-29 14:50:13 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:13.967493 | orchestrator | 2025-08-29 14:50:13 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:13.967526 | orchestrator | 2025-08-29 14:50:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:17.032854 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:17.035929 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:17.037750 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:17.038797 | orchestrator | 2025-08-29 14:50:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:20.098764 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:20.101888 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:20.105197 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:20.105250 | orchestrator | 2025-08-29 14:50:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:23.154910 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:23.156057 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 
63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:23.159815 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:23.160552 | orchestrator | 2025-08-29 14:50:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:26.231252 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:26.233279 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:26.235113 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:26.235153 | orchestrator | 2025-08-29 14:50:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:29.307443 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:29.309705 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:29.310660 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:29.310694 | orchestrator | 2025-08-29 14:50:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:32.374117 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:32.378231 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:32.383378 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:32.386218 | orchestrator | 2025-08-29 14:50:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:35.435444 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:35.436203 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:35.437811 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:35.437829 | orchestrator | 2025-08-29 14:50:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:38.511875 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:38.514528 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:38.516823 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:38.517968 | orchestrator | 2025-08-29 14:50:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:41.558351 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:41.558551 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED 2025-08-29 14:50:41.559893 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:41.559923 | orchestrator | 2025-08-29 14:50:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:44.590871 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state 
STARTED
2025-08-29 14:50:44.591164 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED
2025-08-29 14:50:44.592176 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED
2025-08-29 14:50:44.592212 | orchestrator | 2025-08-29 14:50:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:47.618132 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED
2025-08-29 14:50:47.618489 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED
2025-08-29 14:50:47.619554 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED
2025-08-29 14:50:47.619581 | orchestrator | 2025-08-29 14:50:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:50.660658 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED
2025-08-29 14:50:50.662380 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED
2025-08-29 14:50:50.664206 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED
2025-08-29 14:50:50.664575 | orchestrator | 2025-08-29 14:50:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:53.706396 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED
2025-08-29 14:50:53.707689 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state STARTED
2025-08-29 14:50:53.709276 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED
2025-08-29 14:50:53.709344 | orchestrator | 2025-08-29 14:50:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:56.759922 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED
2025-08-29 14:50:56.760254 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED
2025-08-29 14:50:56.760903 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED
2025-08-29 14:50:56.766937 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 63911e8f-0f21-41fa-8a73-360cee670fb5 is in state SUCCESS
2025-08-29 14:50:56.769805 | orchestrator |
2025-08-29 14:50:56.769885 | orchestrator |
2025-08-29 14:50:56.769908 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-08-29 14:50:56.769921 | orchestrator |
2025-08-29 14:50:56.769933 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 14:50:56.769944 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.192) 0:00:00.192 *********
2025-08-29 14:50:56.769955 | orchestrator | ok: [testbed-manager]
2025-08-29 14:50:56.769967 | orchestrator |
2025-08-29 14:50:56.769979 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 14:50:56.770000 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.868) 0:00:01.061 *********
2025-08-29 14:50:56.770075 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 14:50:56.770121 | orchestrator |
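The next task copies the role's docker-compose.yml to the manager and then starts the service (with retries) on the external traefik network created above. Purely as an illustration of the shape of such a deployment (the role's real template, image reference, labels, and environment are not visible in this log and will differ), a minimal stand-in that writes such a compose file could look like:

```python
from pathlib import Path

# Hypothetical minimal compose file: phpMyAdmin attached to the pre-created
# external "traefik" network. The role's actual file is rendered from a template.
COMPOSE_YML = """\
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    restart: unless-stopped
    networks:
      - traefik

networks:
  traefik:
    external: true
"""

target = Path("/opt/phpmyadmin")
target.mkdir(parents=True, exist_ok=True)                 # "Create required directories"
(target / "docker-compose.yml").write_text(COMPOSE_YML)   # "Copy docker-compose.yml file"
```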
2025-08-29 14:50:56.770134 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 14:50:56.770146 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.559) 0:00:01.620 *********
2025-08-29 14:50:56.770157 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:56.770168 | orchestrator |
2025-08-29 14:50:56.770179 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 14:50:56.770190 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:01.645) 0:00:03.266 *********
2025-08-29 14:50:56.770201 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-08-29 14:50:56.770232 | orchestrator | ok: [testbed-manager]
2025-08-29 14:50:56.770288 | orchestrator |
2025-08-29 14:50:56.770333 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 14:50:56.770353 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:50.093) 0:00:53.359 *********
2025-08-29 14:50:56.770375 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:56.770393 | orchestrator |
2025-08-29 14:50:56.770410 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:50:56.770421 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:50:56.770433 | orchestrator |
2025-08-29 14:50:56.770444 | orchestrator |
2025-08-29 14:50:56.770454 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:50:56.770465 | orchestrator | Friday 29 August 2025 14:49:02 +0000 (0:00:08.463) 0:01:01.823 *********
2025-08-29 14:50:56.770476 | orchestrator | ===============================================================================
2025-08-29 14:50:56.770487 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 50.09s
2025-08-29 14:50:56.770498 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.46s
2025-08-29 14:50:56.770508 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.65s
2025-08-29 14:50:56.770519 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.87s
2025-08-29 14:50:56.770530 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s
2025-08-29 14:50:56.770541 | orchestrator |
2025-08-29 14:50:56.770551 | orchestrator |
2025-08-29 14:50:56.770568 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 14:50:56.770588 | orchestrator |
2025-08-29 14:50:56.770607 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:50:56.770626 | orchestrator | Friday 29 August 2025 14:47:36 +0000 (0:00:00.270) 0:00:00.270 *********
2025-08-29 14:50:56.770639 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:50:56.770651 | orchestrator |
2025-08-29 14:50:56.770662 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 14:50:56.770673 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:01.225) 0:00:01.495 *********
2025-08-29 14:50:56.770684 | orchestrator | changed: [testbed-node-0] =>
(item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770695 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770706 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770716 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770727 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770738 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770749 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770760 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770771 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.770782 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770793 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.770804 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 14:50:56.770815 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770837 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.770848 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770859 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770890 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.770921 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 14:50:56.770932 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.770943 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.770961 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 14:50:56.771020 | orchestrator | 2025-08-29 14:50:56.771031 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-08-29 14:50:56.771043 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:04.040) 0:00:05.537 ********* 2025-08-29 14:50:56.771054 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:50:56.771065 | orchestrator | 2025-08-29 14:50:56.771076 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-08-29 14:50:56.771087 | orchestrator | Friday 29 August 2025 14:47:42 +0000 (0:00:01.178) 0:00:06.716 ********* 2025-08-29 14:50:56.771103 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771142 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.771256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771280 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.771475 | orchestrator | 2025-08-29 14:50:56.771487 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-08-29 14:50:56.771504 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:04.978) 0:00:11.695 ********* 2025-08-29 14:50:56.771521 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771533 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771545 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771610 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:50:56.771621 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:50:56.771633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771791 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:50:56.771803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771844 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:50:56.771855 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:50:56.771871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771907 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:50:56.771918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.771936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.771959 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:50:56.771970 | orchestrator | 2025-08-29 14:50:56.771982 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-08-29 14:50:56.771993 | orchestrator | Friday 29 August 2025 14:47:50 +0000 (0:00:02.532) 0:00:14.227 ********* 2025-08-29 14:50:56.772004 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772030 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772042 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772053 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:50:56.772065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772105 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:50:56.772116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772156 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:50:56.772172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772237 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:50:56.772248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772342 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:50:56.772353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772364 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:50:56.772382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:56.772394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.772416 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:50:56.772427 | orchestrator | 2025-08-29 14:50:56.772438 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-08-29 14:50:56.772449 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:02.426) 0:00:16.654 ********* 2025-08-29 14:50:56.772460 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:50:56.772470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:50:56.772481 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:50:56.772492 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:50:56.772503 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:50:56.772514 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:50:56.772524 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:50:56.772535 | orchestrator | 2025-08-29 14:50:56.772546 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-08-29 14:50:56.772557 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:01.234) 0:00:17.889 ********* 2025-08-29 14:50:56.772568 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:50:56.772579 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:50:56.772589 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:50:56.772600 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:50:56.772611 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:50:56.772621 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:50:56.772632 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:50:56.772643 | orchestrator | 2025-08-29 14:50:56.772654 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-08-29 14:50:56.772672 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:01.663) 0:00:19.553 ********* 2025-08-29 14:50:56.772701 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772753 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772799 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772833 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.772924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772949 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.772996 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.773007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.773019 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.773030 | orchestrator | 2025-08-29 14:50:56.773041 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-08-29 14:50:56.773052 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:07.649) 0:00:27.202 ********* 2025-08-29 14:50:56.773063 | orchestrator | [WARNING]: Skipped 2025-08-29 14:50:56.773075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-08-29 14:50:56.773086 | orchestrator | to this access issue: 2025-08-29 14:50:56.773097 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-08-29 14:50:56.773108 | orchestrator | directory 2025-08-29 14:50:56.773119 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:50:56.773130 | orchestrator | 2025-08-29 14:50:56.773141 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-08-29 14:50:56.773152 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:01.327) 0:00:28.529 ********* 2025-08-29 14:50:56.773163 | orchestrator | [WARNING]: Skipped 2025-08-29 14:50:56.773174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-08-29 14:50:56.773200 | orchestrator | to this access issue: 2025-08-29 14:50:56.773221 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-08-29 14:50:56.773240 | orchestrator | directory 2025-08-29 14:50:56.773258 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:50:56.773270 | orchestrator | 2025-08-29 14:50:56.773281 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-08-29 14:50:56.773292 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.802) 0:00:29.331 ********* 2025-08-29 14:50:56.773339 | orchestrator | [WARNING]: Skipped 2025-08-29 14:50:56.773350 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-08-29 14:50:56.773361 | orchestrator | to this access issue: 2025-08-29 14:50:56.773373 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-08-29 14:50:56.773383 | orchestrator | directory 2025-08-29 14:50:56.773395 | orchestrator | ok: [testbed-manager -> localhost] 
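For readability, the per-service definitions that the common-role loops in this play iterate over (visible in the item output above and below) can be collapsed into a single mapping. The sketch that follows is a reconstruction from this log's own item dumps, written as a plain Python dict for illustration; it is not the kolla-ansible playbook source, and the variable name common_services is assumed.

# Reconstruction (illustrative only) of the service mapping looped over by
# the common role's tasks in this log. All field values are copied from the
# (item={'key': ..., 'value': ...}) entries above; `common_services` is an
# assumed name, not taken from the playbooks.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.7.20250711",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "group": "kolla-toolbox",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711",
        "environment": {
            "ANSIBLE_NOCOLOR": "1",
            "ANSIBLE_LIBRARY": "/usr/share/ansible",
            "REQUESTS_CA_BUNDLE": "/etc/ssl/certs/ca-certificates.crt",
        },
        "privileged": True,
        "volumes": [
            "/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/dev/:/dev/",
            "/run/:/run/:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20250711",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

# Each "skipping"/"changed"/"ok" line tagged with an item in this log
# corresponds to one of these three services on one host.
for name, svc in common_services.items():
    print(f"{name}: image={svc['image']}, volumes={len(svc['volumes'])}")

Read this way, the long per-host item lines in the log are simply the same three entries repeated once per host and per task (config directories, config.json, permissions, container check).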
2025-08-29 14:50:56.773406 | orchestrator | 2025-08-29 14:50:56.773438 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-08-29 14:50:56.773450 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.746) 0:00:30.078 ********* 2025-08-29 14:50:56.773461 | orchestrator | [WARNING]: Skipped 2025-08-29 14:50:56.773472 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-08-29 14:50:56.773483 | orchestrator | to this access issue: 2025-08-29 14:50:56.773495 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-08-29 14:50:56.773506 | orchestrator | directory 2025-08-29 14:50:56.773517 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:50:56.773528 | orchestrator | 2025-08-29 14:50:56.773539 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-08-29 14:50:56.773550 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.972) 0:00:31.050 ********* 2025-08-29 14:50:56.773561 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.773572 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.773583 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.773594 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.773604 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:56.773615 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.773626 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.773636 | orchestrator | 2025-08-29 14:50:56.773648 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-08-29 14:50:56.773659 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:04.316) 0:00:35.367 ********* 2025-08-29 14:50:56.773670 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773681 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773703 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773732 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773743 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:50:56.773755 | orchestrator | 2025-08-29 14:50:56.773766 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-08-29 14:50:56.773777 | orchestrator | Friday 29 August 2025 14:48:14 +0000 (0:00:03.497) 0:00:38.864 ********* 2025-08-29 14:50:56.773788 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.773799 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.773822 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.773841 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.773858 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.773885 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 14:50:56.773906 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.773923 | orchestrator | 2025-08-29 14:50:56.773940 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-08-29 14:50:56.773958 | orchestrator | Friday 29 August 2025 14:48:18 +0000 (0:00:03.302) 0:00:42.167 ********* 2025-08-29 14:50:56.773977 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.773997 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774075 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774155 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774210 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774244 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774286 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774398 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774421 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774441 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774464 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:56.774497 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774514 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774526 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.774543 | orchestrator | 2025-08-29 14:50:56.774562 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-08-29 14:50:56.774581 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:03.168) 0:00:45.335 ********* 2025-08-29 14:50:56.774600 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774619 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774639 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774657 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774676 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774694 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774712 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:50:56.774732 | orchestrator | 2025-08-29 14:50:56.774751 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-08-29 14:50:56.774770 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:03.211) 0:00:48.547 ********* 2025-08-29 14:50:56.774788 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774826 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774844 | orchestrator | changed: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774878 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774894 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:50:56.774911 | orchestrator | 2025-08-29 14:50:56.774928 | orchestrator | TASK [common : Check common containers] **************************************** 2025-08-29 14:50:56.774944 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:02.337) 0:00:50.884 ********* 2025-08-29 14:50:56.774962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.774980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.775016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.775048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.775084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-08-29 14:50:56.775104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775195 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.775225 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:56.775244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775459 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775477 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:56.775494 | orchestrator | 2025-08-29 14:50:56.775529 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-08-29 14:50:56.775550 | orchestrator | Friday 29 August 2025 14:48:32 +0000 (0:00:05.405) 0:00:56.290 ********* 2025-08-29 14:50:56.775566 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.775583 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.775601 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.775618 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.775635 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.775651 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:56.775668 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.775686 | orchestrator | 2025-08-29 14:50:56.775704 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-08-29 14:50:56.775722 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:02.265) 0:00:58.555 ********* 2025-08-29 14:50:56.775739 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.775755 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.775771 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.775787 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.775803 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.775819 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:56.775835 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.775851 | 
orchestrator | 2025-08-29 14:50:56.775868 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.775884 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:01.187) 0:00:59.743 ********* 2025-08-29 14:50:56.775901 | orchestrator | 2025-08-29 14:50:56.775917 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.775934 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.062) 0:00:59.805 ********* 2025-08-29 14:50:56.775949 | orchestrator | 2025-08-29 14:50:56.775965 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.775994 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.057) 0:00:59.863 ********* 2025-08-29 14:50:56.776010 | orchestrator | 2025-08-29 14:50:56.776027 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.776044 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.164) 0:01:00.028 ********* 2025-08-29 14:50:56.776060 | orchestrator | 2025-08-29 14:50:56.776076 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.776093 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.057) 0:01:00.086 ********* 2025-08-29 14:50:56.776129 | orchestrator | 2025-08-29 14:50:56.776147 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.776164 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:00.060) 0:01:00.146 ********* 2025-08-29 14:50:56.776181 | orchestrator | 2025-08-29 14:50:56.776197 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:50:56.776214 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:00.070) 0:01:00.217 ********* 2025-08-29 14:50:56.776231 | orchestrator | 2025-08-29 14:50:56.776246 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-08-29 14:50:56.776257 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:00.096) 0:01:00.314 ********* 2025-08-29 14:50:56.776276 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.776286 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.776318 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.776333 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.776343 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:56.776352 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.776362 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.776371 | orchestrator | 2025-08-29 14:50:56.776381 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-08-29 14:50:56.776392 | orchestrator | Friday 29 August 2025 14:49:20 +0000 (0:00:44.492) 0:01:44.807 ********* 2025-08-29 14:50:56.776408 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.776418 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.776428 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:56.776438 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.776447 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.776456 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.776466 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.776475 
| orchestrator | 2025-08-29 14:50:56.776485 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-08-29 14:50:56.776494 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:01:21.217) 0:03:06.025 ********* 2025-08-29 14:50:56.776504 | orchestrator | ok: [testbed-manager] 2025-08-29 14:50:56.776514 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:50:56.776524 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:50:56.776533 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:50:56.776542 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:50:56.776552 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:50:56.776561 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:50:56.776571 | orchestrator | 2025-08-29 14:50:56.776581 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-08-29 14:50:56.776590 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:02.004) 0:03:08.029 ********* 2025-08-29 14:50:56.776600 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:56.776609 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:56.776619 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:56.776629 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:56.776638 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:56.776648 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:56.776657 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:56.776667 | orchestrator | 2025-08-29 14:50:56.776676 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:50:56.776695 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776706 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776716 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776726 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776736 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776745 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776755 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:50:56.776764 | orchestrator | 2025-08-29 14:50:56.776775 | orchestrator | 2025-08-29 14:50:56.776784 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:50:56.776794 | orchestrator | Friday 29 August 2025 14:50:53 +0000 (0:00:09.544) 0:03:17.573 ********* 2025-08-29 14:50:56.776804 | orchestrator | =============================================================================== 2025-08-29 14:50:56.776814 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 81.22s 2025-08-29 14:50:56.776823 | orchestrator | common : Restart fluentd container ------------------------------------- 44.49s 2025-08-29 14:50:56.776833 | orchestrator | common : Restart cron container ----------------------------------------- 9.54s 2025-08-29 14:50:56.776842 | orchestrator | common : Copying over config.json files for services -------------------- 7.65s 
2025-08-29 14:50:56.776852 | orchestrator | common : Check common containers ---------------------------------------- 5.41s 2025-08-29 14:50:56.776861 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.98s 2025-08-29 14:50:56.776871 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.32s 2025-08-29 14:50:56.776880 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.04s 2025-08-29 14:50:56.776889 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.50s 2025-08-29 14:50:56.776899 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.30s 2025-08-29 14:50:56.776908 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.21s 2025-08-29 14:50:56.776918 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.17s 2025-08-29 14:50:56.776927 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.53s 2025-08-29 14:50:56.776937 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.43s 2025-08-29 14:50:56.776952 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.34s 2025-08-29 14:50:56.776962 | orchestrator | common : Creating log volume -------------------------------------------- 2.27s 2025-08-29 14:50:56.776972 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.00s 2025-08-29 14:50:56.776981 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.66s 2025-08-29 14:50:56.776991 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.33s 2025-08-29 14:50:56.777008 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.23s 2025-08-29 14:50:56.777018 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:56.777028 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:50:56.777044 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:50:56.777053 | orchestrator | 2025-08-29 14:50:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:59.813673 | orchestrator | 2025-08-29 14:50:59 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:50:59.814491 | orchestrator | 2025-08-29 14:50:59 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:50:59.815353 | orchestrator | 2025-08-29 14:50:59 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:50:59.816363 | orchestrator | 2025-08-29 14:50:59 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:50:59.818632 | orchestrator | 2025-08-29 14:50:59 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:50:59.819491 | orchestrator | 2025-08-29 14:50:59 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:50:59.819528 | orchestrator | 2025-08-29 14:50:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:02.921025 | orchestrator | 2025-08-29 14:51:02 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 
is in state STARTED 2025-08-29 14:51:02.921119 | orchestrator | 2025-08-29 14:51:02 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:02.921135 | orchestrator | 2025-08-29 14:51:02 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:02.921147 | orchestrator | 2025-08-29 14:51:02 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:51:02.921158 | orchestrator | 2025-08-29 14:51:02 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:02.921169 | orchestrator | 2025-08-29 14:51:02 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:51:02.921180 | orchestrator | 2025-08-29 14:51:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:05.955761 | orchestrator | 2025-08-29 14:51:05 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:05.956397 | orchestrator | 2025-08-29 14:51:05 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:05.957857 | orchestrator | 2025-08-29 14:51:05 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:05.958647 | orchestrator | 2025-08-29 14:51:05 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:51:05.959403 | orchestrator | 2025-08-29 14:51:05 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:05.966924 | orchestrator | 2025-08-29 14:51:05 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:51:05.966979 | orchestrator | 2025-08-29 14:51:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:09.061610 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:09.061691 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:09.061700 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:09.061708 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state STARTED 2025-08-29 14:51:09.061715 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:09.061744 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:51:09.061753 | orchestrator | 2025-08-29 14:51:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:12.345714 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:12.345871 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:12.349043 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task a27a9bde-c502-4241-8ebd-97bf42a6d64f is in state STARTED 2025-08-29 14:51:12.349534 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:12.350107 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 73c82cc7-50f4-4701-80b1-d88d88781036 is in state STARTED 2025-08-29 14:51:12.351431 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 373488d3-a056-4d1d-8365-602eeeb6bacd is in state SUCCESS 2025-08-29 14:51:12.356944 | orchestrator | 2025-08-29 14:51:12.357001 
| orchestrator | 2025-08-29 14:51:12.357023 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-08-29 14:51:12.357042 | orchestrator | 2025-08-29 14:51:12.357062 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-08-29 14:51:12.357097 | orchestrator | Friday 29 August 2025 14:47:36 +0000 (0:00:00.172) 0:00:00.172 ********* 2025-08-29 14:51:12.357119 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.357141 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.357161 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.357180 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.357201 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.357223 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.357243 | orchestrator | 2025-08-29 14:51:12.357262 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-08-29 14:51:12.357273 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:00.620) 0:00:00.792 ********* 2025-08-29 14:51:12.357284 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.357328 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.357339 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.357350 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.357361 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.357372 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.357382 | orchestrator | 2025-08-29 14:51:12.357394 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-08-29 14:51:12.357405 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.618) 0:00:01.410 ********* 2025-08-29 14:51:12.357416 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.357427 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.357438 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.357455 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.357473 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.357491 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.357513 | orchestrator | 2025-08-29 14:51:12.357531 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-08-29 14:51:12.357551 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.661) 0:00:02.072 ********* 2025-08-29 14:51:12.357573 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.357594 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.357615 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.357635 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.357653 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.357674 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.357692 | orchestrator | 2025-08-29 14:51:12.357709 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 14:51:12.357740 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:02.600) 0:00:04.673 ********* 2025-08-29 14:51:12.357751 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.357762 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.357773 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.357783 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.357794 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.357804 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.357814 | orchestrator | 2025-08-29 14:51:12.357825 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 14:51:12.357836 | orchestrator | Friday 29 August 2025 14:47:42 +0000 (0:00:01.078) 0:00:05.752 ********* 2025-08-29 14:51:12.357846 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.357857 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.357867 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.357878 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.357889 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.357899 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.357910 | orchestrator | 2025-08-29 14:51:12.357920 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 14:51:12.357931 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.967) 0:00:06.720 ********* 2025-08-29 14:51:12.357941 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.357952 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.357962 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.357973 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.357983 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.357994 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.358004 | orchestrator | 2025-08-29 14:51:12.358074 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 14:51:12.358089 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.653) 0:00:07.373 ********* 2025-08-29 14:51:12.358100 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.358111 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.358121 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.358132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.358143 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.358153 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.358172 | orchestrator | 2025-08-29 14:51:12.358191 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 14:51:12.358209 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.755) 0:00:08.129 ********* 2025-08-29 14:51:12.358228 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:12.358246 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:12.358267 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:12.358285 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:12.358328 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.358358 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:12.358378 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:12.358398 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.358415 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:12.358435 | orchestrator | skipping: 
[testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:12.358473 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.358485 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:12.358496 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:12.358517 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.358528 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.358539 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:12.358550 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:12.358560 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.358571 | orchestrator | 2025-08-29 14:51:12.358581 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 14:51:12.358592 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.646) 0:00:08.775 ********* 2025-08-29 14:51:12.358602 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.358613 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.358623 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.358634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.358644 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.358655 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.358665 | orchestrator | 2025-08-29 14:51:12.358676 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 14:51:12.358687 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:01.498) 0:00:10.273 ********* 2025-08-29 14:51:12.358698 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.358709 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.358719 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.358730 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.358740 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.358751 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.358763 | orchestrator | 2025-08-29 14:51:12.358782 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 14:51:12.358800 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.073) 0:00:11.347 ********* 2025-08-29 14:51:12.358818 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.358836 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.358854 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.358873 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.358894 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.358914 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.358934 | orchestrator | 2025-08-29 14:51:12.358951 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 14:51:12.358971 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:05.988) 0:00:17.335 ********* 2025-08-29 14:51:12.358989 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.359007 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.359027 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.359045 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.359062 | orchestrator 
| skipping: [testbed-node-1] 2025-08-29 14:51:12.359073 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.359084 | orchestrator | 2025-08-29 14:51:12.359094 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 14:51:12.359105 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:01.697) 0:00:19.033 ********* 2025-08-29 14:51:12.359116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.359126 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.359137 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.359147 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.359158 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.359168 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.359179 | orchestrator | 2025-08-29 14:51:12.359190 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-08-29 14:51:12.359202 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:02.094) 0:00:21.128 ********* 2025-08-29 14:51:12.359212 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.359223 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.359242 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.359253 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.359264 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.359274 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.359285 | orchestrator | 2025-08-29 14:51:12.359343 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 14:51:12.359354 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.820) 0:00:21.949 ********* 2025-08-29 14:51:12.359365 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 14:51:12.359379 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 14:51:12.359398 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 14:51:12.359415 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 14:51:12.359433 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-08-29 14:51:12.359451 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 14:51:12.359469 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-08-29 14:51:12.359487 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 14:51:12.359507 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 14:51:12.359528 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 14:51:12.359548 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 14:51:12.359568 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 14:51:12.359585 | orchestrator | 2025-08-29 14:51:12.359613 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 14:51:12.359631 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:02.997) 0:00:24.947 ********* 2025-08-29 14:51:12.359647 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.359658 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.359669 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.359680 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.359690 | orchestrator | changed: [testbed-node-1] 
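[editor's note] The k3s_custom_registries task above writes /etc/rancher/k3s/registries.yaml, which points the containerd runtime embedded in k3s at a registry mirror. A minimal sketch of that file's documented format, assuming a plain mirror without authentication; the endpoint URL is a placeholder, not the value deployed by this role:

# /etc/rancher/k3s/registries.yaml (illustrative; endpoint is a hypothetical pull-through cache)
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
  quay.io:
    endpoint:
      - "https://registry.example.com:5000"
# k3s reads this file when the service (re)starts; no separate containerd configuration is required.
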
2025-08-29 14:51:12.359701 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.359711 | orchestrator | 2025-08-29 14:51:12.359732 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 14:51:12.359744 | orchestrator | 2025-08-29 14:51:12.359754 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 14:51:12.359765 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:01.613) 0:00:26.560 ********* 2025-08-29 14:51:12.359776 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.359786 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.359797 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.359808 | orchestrator | 2025-08-29 14:51:12.359818 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 14:51:12.359829 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:00.938) 0:00:27.498 ********* 2025-08-29 14:51:12.359839 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.359850 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.359861 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.359870 | orchestrator | 2025-08-29 14:51:12.359884 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-08-29 14:51:12.359894 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:01.179) 0:00:28.677 ********* 2025-08-29 14:51:12.359904 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.359913 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.359925 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.359942 | orchestrator | 2025-08-29 14:51:12.359958 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 14:51:12.359974 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:01.572) 0:00:30.250 ********* 2025-08-29 14:51:12.359989 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.360006 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.360024 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.360042 | orchestrator | 2025-08-29 14:51:12.360069 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 14:51:12.360087 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:00.878) 0:00:31.129 ********* 2025-08-29 14:51:12.360103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.360120 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.360136 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.360153 | orchestrator | 2025-08-29 14:51:12.360170 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 14:51:12.360187 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.589) 0:00:31.718 ********* 2025-08-29 14:51:12.360199 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.360209 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.360218 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.360228 | orchestrator | 2025-08-29 14:51:12.360237 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 14:51:12.360247 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:01.196) 0:00:32.915 ********* 2025-08-29 14:51:12.360256 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.360265 | orchestrator | 
changed: [testbed-node-0] 2025-08-29 14:51:12.360275 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.360284 | orchestrator | 2025-08-29 14:51:12.360311 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 14:51:12.360321 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:01.502) 0:00:34.417 ********* 2025-08-29 14:51:12.360330 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:12.360340 | orchestrator | 2025-08-29 14:51:12.360350 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 14:51:12.360359 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:00.424) 0:00:34.842 ********* 2025-08-29 14:51:12.360369 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.360378 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.360388 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.360397 | orchestrator | 2025-08-29 14:51:12.360408 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-08-29 14:51:12.360425 | orchestrator | Friday 29 August 2025 14:48:13 +0000 (0:00:02.012) 0:00:36.854 ********* 2025-08-29 14:51:12.360441 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.360456 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.360472 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.360488 | orchestrator | 2025-08-29 14:51:12.360506 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 14:51:12.360525 | orchestrator | Friday 29 August 2025 14:48:14 +0000 (0:00:00.783) 0:00:37.638 ********* 2025-08-29 14:51:12.360543 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.360561 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.360578 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.360593 | orchestrator | 2025-08-29 14:51:12.360611 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-08-29 14:51:12.360626 | orchestrator | Friday 29 August 2025 14:48:14 +0000 (0:00:00.731) 0:00:38.369 ********* 2025-08-29 14:51:12.360640 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.360650 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.360659 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.360668 | orchestrator | 2025-08-29 14:51:12.360678 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-08-29 14:51:12.360687 | orchestrator | Friday 29 August 2025 14:48:16 +0000 (0:00:01.585) 0:00:39.955 ********* 2025-08-29 14:51:12.360696 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.360706 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.360715 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.360724 | orchestrator | 2025-08-29 14:51:12.360734 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-08-29 14:51:12.360743 | orchestrator | Friday 29 August 2025 14:48:16 +0000 (0:00:00.361) 0:00:40.317 ********* 2025-08-29 14:51:12.360761 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.360776 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.360786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.360795 | orchestrator | 2025-08-29 
14:51:12.360805 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-08-29 14:51:12.360814 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.563) 0:00:40.881 ********* 2025-08-29 14:51:12.360824 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.360833 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.360843 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.360852 | orchestrator | 2025-08-29 14:51:12.360871 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-08-29 14:51:12.360881 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:02.175) 0:00:43.056 ********* 2025-08-29 14:51:12.360891 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:51:12.360901 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:51:12.360911 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:51:12.360921 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:51:12.360930 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:51:12.360940 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:51:12.360950 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:51:12.360968 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:51:12.360984 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:51:12.360999 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:51:12.361016 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:51:12.361032 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
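[editor's note] The FAILED - RETRYING messages above are expected while the three control-plane nodes converge: the role polls the embedded kubectl until every server shows up in the node list, then continues. A minimal sketch of such a check, assuming behavior similar to the upstream k3s-ansible role; the inventory group name ('master') and the label selector are illustrative, not taken from this role:

# Poll until every control-plane node has registered (illustrative sketch)
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: >-
      k3s kubectl get nodes
      -l "node-role.kubernetes.io/master=true"
      -o jsonpath={.items[*].metadata.name}
  register: nodes
  # Succeed once one node name per host in the 'master' group is reported
  until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
  retries: 20      # matches the "20 retries left" countdown in the log
  delay: 10
  changed_when: false
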
2025-08-29 14:51:12.361049 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.361065 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.361081 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.361099 | orchestrator | 2025-08-29 14:51:12.361118 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-08-29 14:51:12.361135 | orchestrator | Friday 29 August 2025 14:49:04 +0000 (0:00:45.124) 0:01:28.181 ********* 2025-08-29 14:51:12.361151 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.361168 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.361185 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.361201 | orchestrator | 2025-08-29 14:51:12.361219 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-08-29 14:51:12.361235 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:00.355) 0:01:28.537 ********* 2025-08-29 14:51:12.361248 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.361258 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.361267 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.361285 | orchestrator | 2025-08-29 14:51:12.361349 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-08-29 14:51:12.361361 | orchestrator | Friday 29 August 2025 14:49:06 +0000 (0:00:01.459) 0:01:29.997 ********* 2025-08-29 14:51:12.361370 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.361380 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.361389 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.361399 | orchestrator | 2025-08-29 14:51:12.361408 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-08-29 14:51:12.361418 | orchestrator | Friday 29 August 2025 14:49:07 +0000 (0:00:01.213) 0:01:31.210 ********* 2025-08-29 14:51:12.361428 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.361437 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.361446 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.361456 | orchestrator | 2025-08-29 14:51:12.361465 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-08-29 14:51:12.361475 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:23.200) 0:01:54.411 ********* 2025-08-29 14:51:12.361485 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.361494 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.361504 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.361513 | orchestrator | 2025-08-29 14:51:12.361526 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-08-29 14:51:12.361543 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:00.761) 0:01:55.172 ********* 2025-08-29 14:51:12.361561 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.361579 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.361595 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.361614 | orchestrator | 2025-08-29 14:51:12.361634 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-08-29 14:51:12.361661 | orchestrator | Friday 29 August 2025 14:49:32 +0000 (0:00:00.934) 0:01:56.106 ********* 2025-08-29 14:51:12.361680 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.361693 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 14:51:12.361708 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.361722 | orchestrator | 2025-08-29 14:51:12.361734 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-08-29 14:51:12.361742 | orchestrator | Friday 29 August 2025 14:49:33 +0000 (0:00:00.770) 0:01:56.877 ********* 2025-08-29 14:51:12.361750 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.361765 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.361773 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.361780 | orchestrator | 2025-08-29 14:51:12.361788 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-08-29 14:51:12.361796 | orchestrator | Friday 29 August 2025 14:49:34 +0000 (0:00:00.770) 0:01:57.648 ********* 2025-08-29 14:51:12.361804 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.361811 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.361819 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.361827 | orchestrator | 2025-08-29 14:51:12.361835 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-08-29 14:51:12.361842 | orchestrator | Friday 29 August 2025 14:49:34 +0000 (0:00:00.306) 0:01:57.954 ********* 2025-08-29 14:51:12.361850 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.361859 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.361873 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.361886 | orchestrator | 2025-08-29 14:51:12.361898 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-08-29 14:51:12.361911 | orchestrator | Friday 29 August 2025 14:49:35 +0000 (0:00:01.151) 0:01:59.106 ********* 2025-08-29 14:51:12.361924 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.361939 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.361953 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.361968 | orchestrator | 2025-08-29 14:51:12.361982 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-08-29 14:51:12.362006 | orchestrator | Friday 29 August 2025 14:49:36 +0000 (0:00:00.838) 0:01:59.945 ********* 2025-08-29 14:51:12.362050 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.362060 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.362067 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.362075 | orchestrator | 2025-08-29 14:51:12.362083 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-08-29 14:51:12.362091 | orchestrator | Friday 29 August 2025 14:49:37 +0000 (0:00:01.053) 0:02:00.998 ********* 2025-08-29 14:51:12.362098 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:12.362106 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:12.362114 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:12.362122 | orchestrator | 2025-08-29 14:51:12.362130 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-08-29 14:51:12.362137 | orchestrator | Friday 29 August 2025 14:49:38 +0000 (0:00:00.904) 0:02:01.903 ********* 2025-08-29 14:51:12.362145 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.362153 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.362161 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:51:12.362173 | orchestrator | 2025-08-29 14:51:12.362185 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-08-29 14:51:12.362198 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.623) 0:02:02.527 ********* 2025-08-29 14:51:12.362212 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.362225 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.362239 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.362254 | orchestrator | 2025-08-29 14:51:12.362269 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-08-29 14:51:12.362283 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.394) 0:02:02.921 ********* 2025-08-29 14:51:12.362318 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.362331 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.362344 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.362357 | orchestrator | 2025-08-29 14:51:12.362368 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-08-29 14:51:12.362376 | orchestrator | Friday 29 August 2025 14:49:40 +0000 (0:00:00.728) 0:02:03.650 ********* 2025-08-29 14:51:12.362383 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.362391 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.362399 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.362407 | orchestrator | 2025-08-29 14:51:12.362415 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-08-29 14:51:12.362422 | orchestrator | Friday 29 August 2025 14:49:40 +0000 (0:00:00.705) 0:02:04.355 ********* 2025-08-29 14:51:12.362430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 14:51:12.362438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 14:51:12.362446 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 14:51:12.362454 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 14:51:12.362462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 14:51:12.362470 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 14:51:12.362477 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 14:51:12.362485 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 14:51:12.362493 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 14:51:12.362501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-08-29 14:51:12.362521 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 14:51:12.362529 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 14:51:12.362536 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-08-29 14:51:12.362552 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 14:51:12.362560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 14:51:12.362568 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 14:51:12.362576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 14:51:12.362583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 14:51:12.362591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 14:51:12.362599 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 14:51:12.362607 | orchestrator | 2025-08-29 14:51:12.362614 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-08-29 14:51:12.362622 | orchestrator | 2025-08-29 14:51:12.362630 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-08-29 14:51:12.362638 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:03.420) 0:02:07.776 ********* 2025-08-29 14:51:12.362646 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.362653 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.362661 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.362673 | orchestrator | 2025-08-29 14:51:12.362686 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-08-29 14:51:12.362699 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.389) 0:02:08.166 ********* 2025-08-29 14:51:12.362712 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.362725 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.362740 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.362755 | orchestrator | 2025-08-29 14:51:12.362769 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-08-29 14:51:12.362784 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.633) 0:02:08.799 ********* 2025-08-29 14:51:12.362797 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.362810 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.362824 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.362837 | orchestrator | 2025-08-29 14:51:12.362847 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-08-29 14:51:12.362855 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.316) 0:02:09.116 ********* 2025-08-29 14:51:12.362863 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:51:12.362871 | orchestrator | 2025-08-29 14:51:12.362879 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-08-29 14:51:12.362886 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.639) 0:02:09.756 ********* 2025-08-29 14:51:12.362894 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.362902 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.362909 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.362917 | orchestrator | 2025-08-29 14:51:12.362925 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-08-29 14:51:12.362933 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.293) 0:02:10.050 ********* 2025-08-29 14:51:12.362940 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.362948 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.362956 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.362975 | orchestrator | 2025-08-29 14:51:12.362983 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-08-29 14:51:12.362991 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.302) 0:02:10.352 ********* 2025-08-29 14:51:12.362998 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.363006 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.363014 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.363022 | orchestrator | 2025-08-29 14:51:12.363029 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-08-29 14:51:12.363037 | orchestrator | Friday 29 August 2025 14:49:47 +0000 (0:00:00.672) 0:02:11.024 ********* 2025-08-29 14:51:12.363045 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.363053 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.363060 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.363068 | orchestrator | 2025-08-29 14:51:12.363076 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-08-29 14:51:12.363084 | orchestrator | Friday 29 August 2025 14:49:48 +0000 (0:00:00.766) 0:02:11.791 ********* 2025-08-29 14:51:12.363091 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.363099 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.363107 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.363114 | orchestrator | 2025-08-29 14:51:12.363122 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-08-29 14:51:12.363130 | orchestrator | Friday 29 August 2025 14:49:49 +0000 (0:00:01.228) 0:02:13.020 ********* 2025-08-29 14:51:12.363138 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.363145 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.363153 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.363161 | orchestrator | 2025-08-29 14:51:12.363168 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-08-29 14:51:12.363176 | orchestrator | Friday 29 August 2025 14:49:50 +0000 (0:00:01.263) 0:02:14.284 ********* 2025-08-29 14:51:12.363184 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:12.363192 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:12.363199 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:12.363207 | orchestrator | 2025-08-29 14:51:12.363215 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 14:51:12.363223 | orchestrator | 2025-08-29 14:51:12.363235 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 14:51:12.363243 | orchestrator | Friday 29 August 2025 14:50:03 +0000 (0:00:12.633) 0:02:26.917 ********* 2025-08-29 14:51:12.363251 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.363259 | orchestrator | 2025-08-29 14:51:12.363267 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 
14:51:12.363275 | orchestrator | Friday 29 August 2025 14:50:04 +0000 (0:00:00.809) 0:02:27.727 ********* 2025-08-29 14:51:12.363305 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363315 | orchestrator | 2025-08-29 14:51:12.363323 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:51:12.363331 | orchestrator | Friday 29 August 2025 14:50:04 +0000 (0:00:00.462) 0:02:28.190 ********* 2025-08-29 14:51:12.363339 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:51:12.363346 | orchestrator | 2025-08-29 14:51:12.363354 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:51:12.363362 | orchestrator | Friday 29 August 2025 14:50:05 +0000 (0:00:00.579) 0:02:28.769 ********* 2025-08-29 14:51:12.363370 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363377 | orchestrator | 2025-08-29 14:51:12.363385 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 14:51:12.363393 | orchestrator | Friday 29 August 2025 14:50:06 +0000 (0:00:00.961) 0:02:29.731 ********* 2025-08-29 14:51:12.363400 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363408 | orchestrator | 2025-08-29 14:51:12.363416 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 14:51:12.363429 | orchestrator | Friday 29 August 2025 14:50:07 +0000 (0:00:00.950) 0:02:30.681 ********* 2025-08-29 14:51:12.363437 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:51:12.363445 | orchestrator | 2025-08-29 14:51:12.363452 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 14:51:12.363460 | orchestrator | Friday 29 August 2025 14:50:08 +0000 (0:00:01.660) 0:02:32.341 ********* 2025-08-29 14:51:12.363468 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:51:12.363476 | orchestrator | 2025-08-29 14:51:12.363483 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 14:51:12.363491 | orchestrator | Friday 29 August 2025 14:50:09 +0000 (0:00:00.855) 0:02:33.197 ********* 2025-08-29 14:51:12.363499 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363506 | orchestrator | 2025-08-29 14:51:12.363514 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 14:51:12.363522 | orchestrator | Friday 29 August 2025 14:50:10 +0000 (0:00:00.434) 0:02:33.631 ********* 2025-08-29 14:51:12.363530 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363537 | orchestrator | 2025-08-29 14:51:12.363545 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-08-29 14:51:12.363553 | orchestrator | 2025-08-29 14:51:12.363561 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-08-29 14:51:12.363568 | orchestrator | Friday 29 August 2025 14:50:10 +0000 (0:00:00.572) 0:02:34.204 ********* 2025-08-29 14:51:12.363578 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.363591 | orchestrator | 2025-08-29 14:51:12.363605 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-08-29 14:51:12.363618 | orchestrator | Friday 29 August 2025 14:50:11 +0000 (0:00:00.234) 0:02:34.439 ********* 2025-08-29 
14:51:12.363631 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:51:12.363644 | orchestrator | 2025-08-29 14:51:12.363657 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-08-29 14:51:12.363671 | orchestrator | Friday 29 August 2025 14:50:11 +0000 (0:00:00.245) 0:02:34.685 ********* 2025-08-29 14:51:12.363686 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.363700 | orchestrator | 2025-08-29 14:51:12.363715 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-08-29 14:51:12.363729 | orchestrator | Friday 29 August 2025 14:50:12 +0000 (0:00:01.033) 0:02:35.718 ********* 2025-08-29 14:51:12.363742 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.363755 | orchestrator | 2025-08-29 14:51:12.363768 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-08-29 14:51:12.363779 | orchestrator | Friday 29 August 2025 14:50:14 +0000 (0:00:02.476) 0:02:38.195 ********* 2025-08-29 14:51:12.363787 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363795 | orchestrator | 2025-08-29 14:51:12.363803 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-08-29 14:51:12.363810 | orchestrator | Friday 29 August 2025 14:50:15 +0000 (0:00:00.921) 0:02:39.117 ********* 2025-08-29 14:51:12.363818 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.363826 | orchestrator | 2025-08-29 14:51:12.363833 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-08-29 14:51:12.363841 | orchestrator | Friday 29 August 2025 14:50:16 +0000 (0:00:00.443) 0:02:39.561 ********* 2025-08-29 14:51:12.363849 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363856 | orchestrator | 2025-08-29 14:51:12.363864 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-08-29 14:51:12.363872 | orchestrator | Friday 29 August 2025 14:50:24 +0000 (0:00:08.558) 0:02:48.119 ********* 2025-08-29 14:51:12.363879 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.363887 | orchestrator | 2025-08-29 14:51:12.363895 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-08-29 14:51:12.363903 | orchestrator | Friday 29 August 2025 14:50:39 +0000 (0:00:15.162) 0:03:03.282 ********* 2025-08-29 14:51:12.363917 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.363925 | orchestrator | 2025-08-29 14:51:12.363933 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-08-29 14:51:12.363940 | orchestrator | 2025-08-29 14:51:12.363948 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-08-29 14:51:12.363956 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.534) 0:03:03.816 ********* 2025-08-29 14:51:12.363964 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.363971 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.363983 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.363991 | orchestrator | 2025-08-29 14:51:12.363999 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-08-29 14:51:12.364007 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.324) 0:03:04.141 ********* 
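[editor's note] The kubectl role logged above installs kubectl on the manager from the upstream Kubernetes apt repository (gpg key, repository entry, package install). A minimal sketch of equivalent tasks, assuming the pkgs.k8s.io repository layout; the pinned minor version and keyring path are illustrative, not taken from the role:

# Illustrative equivalent of the kubectl install tasks above (Debian family)
- name: Add repository gpg key
  ansible.builtin.get_url:
    url: https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key   # hypothetical pinned minor version
    dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc
    mode: "0644"

- name: Add repository Debian
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /"
    state: present

- name: Install required packages
  ansible.builtin.apt:
    name: kubectl
    state: present
    update_cache: true
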
2025-08-29 14:51:12.364015 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364023 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.364030 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.364038 | orchestrator | 2025-08-29 14:51:12.364052 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-08-29 14:51:12.364060 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.546) 0:03:04.687 ********* 2025-08-29 14:51:12.364068 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:12.364075 | orchestrator | 2025-08-29 14:51:12.364083 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-08-29 14:51:12.364091 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.540) 0:03:05.228 ********* 2025-08-29 14:51:12.364099 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364106 | orchestrator | 2025-08-29 14:51:12.364114 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-08-29 14:51:12.364122 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.208) 0:03:05.436 ********* 2025-08-29 14:51:12.364129 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364137 | orchestrator | 2025-08-29 14:51:12.364145 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-08-29 14:51:12.364153 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.185) 0:03:05.622 ********* 2025-08-29 14:51:12.364161 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364168 | orchestrator | 2025-08-29 14:51:12.364176 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-08-29 14:51:12.364184 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.191) 0:03:05.814 ********* 2025-08-29 14:51:12.364191 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364199 | orchestrator | 2025-08-29 14:51:12.364207 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-08-29 14:51:12.364215 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.496) 0:03:06.311 ********* 2025-08-29 14:51:12.364222 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364230 | orchestrator | 2025-08-29 14:51:12.364238 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-08-29 14:51:12.364245 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.244) 0:03:06.555 ********* 2025-08-29 14:51:12.364253 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364261 | orchestrator | 2025-08-29 14:51:12.364269 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-08-29 14:51:12.364276 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.219) 0:03:06.775 ********* 2025-08-29 14:51:12.364284 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364333 | orchestrator | 2025-08-29 14:51:12.364343 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-08-29 14:51:12.364350 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.236) 0:03:07.011 ********* 2025-08-29 14:51:12.364358 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364373 | orchestrator | 
2025-08-29 14:51:12.364381 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-08-29 14:51:12.364389 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.267) 0:03:07.279 ********* 2025-08-29 14:51:12.364397 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364405 | orchestrator | 2025-08-29 14:51:12.364412 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-08-29 14:51:12.364420 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:00.257) 0:03:07.536 ********* 2025-08-29 14:51:12.364428 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-08-29 14:51:12.364436 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-08-29 14:51:12.364444 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364451 | orchestrator | 2025-08-29 14:51:12.364459 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-08-29 14:51:12.364467 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:00.384) 0:03:07.921 ********* 2025-08-29 14:51:12.364475 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364482 | orchestrator | 2025-08-29 14:51:12.364490 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-08-29 14:51:12.364498 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:00.216) 0:03:08.138 ********* 2025-08-29 14:51:12.364506 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364514 | orchestrator | 2025-08-29 14:51:12.364521 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-08-29 14:51:12.364529 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:00.192) 0:03:08.330 ********* 2025-08-29 14:51:12.364537 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364545 | orchestrator | 2025-08-29 14:51:12.364552 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-08-29 14:51:12.364560 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:00.199) 0:03:08.530 ********* 2025-08-29 14:51:12.364568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364575 | orchestrator | 2025-08-29 14:51:12.364583 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-08-29 14:51:12.364593 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:00.283) 0:03:08.813 ********* 2025-08-29 14:51:12.364607 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364620 | orchestrator | 2025-08-29 14:51:12.364632 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-08-29 14:51:12.364645 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:00.253) 0:03:09.067 ********* 2025-08-29 14:51:12.364658 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364672 | orchestrator | 2025-08-29 14:51:12.364687 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-08-29 14:51:12.364701 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:00.511) 0:03:09.578 ********* 2025-08-29 14:51:12.364720 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364735 | orchestrator | 2025-08-29 14:51:12.364748 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-08-29 14:51:12.364762 
| orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:00.193) 0:03:09.771 ********* 2025-08-29 14:51:12.364776 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364788 | orchestrator | 2025-08-29 14:51:12.364799 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-08-29 14:51:12.364814 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:00.201) 0:03:09.972 ********* 2025-08-29 14:51:12.364822 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364830 | orchestrator | 2025-08-29 14:51:12.364837 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-08-29 14:51:12.364845 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:00.184) 0:03:10.157 ********* 2025-08-29 14:51:12.364852 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364859 | orchestrator | 2025-08-29 14:51:12.364866 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-08-29 14:51:12.364879 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:00.181) 0:03:10.338 ********* 2025-08-29 14:51:12.364885 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364892 | orchestrator | 2025-08-29 14:51:12.364898 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-08-29 14:51:12.364905 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:00.206) 0:03:10.545 ********* 2025-08-29 14:51:12.364911 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-08-29 14:51:12.364918 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-08-29 14:51:12.364925 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-08-29 14:51:12.364931 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-08-29 14:51:12.364938 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364944 | orchestrator | 2025-08-29 14:51:12.364951 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-08-29 14:51:12.364958 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:00.427) 0:03:10.972 ********* 2025-08-29 14:51:12.364964 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364971 | orchestrator | 2025-08-29 14:51:12.364977 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-08-29 14:51:12.364984 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:00.195) 0:03:11.168 ********* 2025-08-29 14:51:12.364990 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.364997 | orchestrator | 2025-08-29 14:51:12.365003 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-08-29 14:51:12.365010 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:00.191) 0:03:11.360 ********* 2025-08-29 14:51:12.365016 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.365023 | orchestrator | 2025-08-29 14:51:12.365029 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-08-29 14:51:12.365036 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:00.195) 0:03:11.555 ********* 2025-08-29 14:51:12.365043 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.365049 | orchestrator | 2025-08-29 14:51:12.365056 | orchestrator | TASK [k3s_server_post : Test 
for BGP config resources] ************************* 2025-08-29 14:51:12.365062 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:00.186) 0:03:11.742 ********* 2025-08-29 14:51:12.365069 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-08-29 14:51:12.365075 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-08-29 14:51:12.365082 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.365089 | orchestrator | 2025-08-29 14:51:12.365095 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-08-29 14:51:12.365102 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:00.420) 0:03:12.162 ********* 2025-08-29 14:51:12.365108 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.365115 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.365121 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.365128 | orchestrator | 2025-08-29 14:51:12.365135 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-08-29 14:51:12.365141 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.411) 0:03:12.573 ********* 2025-08-29 14:51:12.365148 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.365154 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:12.365161 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.365167 | orchestrator | 2025-08-29 14:51:12.365174 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-08-29 14:51:12.365181 | orchestrator | 2025-08-29 14:51:12.365187 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-08-29 14:51:12.365194 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:00.879) 0:03:13.452 ********* 2025-08-29 14:51:12.365200 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:12.365212 | orchestrator | 2025-08-29 14:51:12.365218 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-08-29 14:51:12.365225 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:00.170) 0:03:13.623 ********* 2025-08-29 14:51:12.365231 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:51:12.365238 | orchestrator | 2025-08-29 14:51:12.365245 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-08-29 14:51:12.365251 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:00.415) 0:03:14.038 ********* 2025-08-29 14:51:12.365257 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:12.365264 | orchestrator | 2025-08-29 14:51:12.365271 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-08-29 14:51:12.365277 | orchestrator | 2025-08-29 14:51:12.365284 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-08-29 14:51:12.365308 | orchestrator | Friday 29 August 2025 14:50:56 +0000 (0:00:05.447) 0:03:19.486 ********* 2025-08-29 14:51:12.365316 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:12.365327 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:12.365334 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:12.365340 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:12.365347 | orchestrator | ok: [testbed-node-1] 
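[editor's note] The "Manage labels" task that follows applies the merged label set to each node with kubectl, delegated to localhost where the kubeconfig was written. Functionally it reduces to something like the sketch below; the label values are taken from the log, but the loop is illustrative (the real role builds the per-host label list from group membership):

    # Simplified sketch of the label application step.
    - name: Apply node labels via kubectl
      ansible.builtin.command: >
        kubectl label node {{ inventory_hostname }} {{ item }} --overwrite
      delegate_to: localhost
      loop:
        - node-role.osism.tech/control-plane=true   # control-plane hosts
        - openstack-control-plane=enabled
        - node-role.osism.tech/rook-mon=true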
2025-08-29 14:51:12.365353 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:12.365359 | orchestrator | 2025-08-29 14:51:12.365366 | orchestrator | TASK [Manage labels] *********************************************************** 2025-08-29 14:51:12.365373 | orchestrator | Friday 29 August 2025 14:50:56 +0000 (0:00:00.583) 0:03:20.070 ********* 2025-08-29 14:51:12.365383 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:51:12.365390 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:51:12.365397 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:51:12.365403 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:51:12.365410 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:51:12.365417 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:51:12.365423 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:51:12.365430 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:51:12.365436 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:51:12.365443 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:51:12.365449 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:51:12.365456 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:51:12.365463 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:51:12.365469 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:51:12.365476 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:51:12.365482 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:51:12.365489 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:51:12.365495 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:51:12.365502 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:51:12.365509 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:51:12.365519 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:51:12.365526 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:51:12.365533 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:51:12.365539 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:51:12.365546 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:51:12.365552 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:51:12.365559 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:51:12.365566 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:51:12.365572 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:51:12.365579 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:51:12.365585 | orchestrator | 2025-08-29 14:51:12.365592 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 14:51:12.365599 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:11.437) 0:03:31.507 ********* 2025-08-29 14:51:12.365605 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.365615 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.365626 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.365637 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.365649 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.365660 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.365671 | orchestrator | 2025-08-29 14:51:12.365683 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 14:51:12.365696 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:00.642) 0:03:32.150 ********* 2025-08-29 14:51:12.365708 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:12.365721 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:12.365731 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:12.365743 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:12.365752 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:12.365758 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:12.365765 | orchestrator | 2025-08-29 14:51:12.365771 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:12.365778 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:12.365785 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 14:51:12.365792 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:51:12.365803 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:51:12.365815 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:51:12.365823 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:51:12.365829 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:51:12.365836 | orchestrator | 2025-08-29 14:51:12.365842 | orchestrator | 2025-08-29 14:51:12.365849 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:12.365861 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.733) 0:03:32.884 ********* 2025-08-29 14:51:12.365867 | orchestrator | =============================================================================== 2025-08-29 14:51:12.365874 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.12s 2025-08-29 14:51:12.365881 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.20s 2025-08-29 14:51:12.365887 | orchestrator | kubectl : Install required packages ------------------------------------ 15.16s 2025-08-29 14:51:12.365894 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.63s 2025-08-29 14:51:12.365900 | orchestrator | Manage labels ---------------------------------------------------------- 11.44s 2025-08-29 14:51:12.365907 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.56s 2025-08-29 14:51:12.365913 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.99s 2025-08-29 14:51:12.365920 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.45s 2025-08-29 14:51:12.365926 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.42s 2025-08-29 14:51:12.365933 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.00s 2025-08-29 14:51:12.365939 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.60s 2025-08-29 14:51:12.365946 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.48s 2025-08-29 14:51:12.365952 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.18s 2025-08-29 14:51:12.365959 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.09s 2025-08-29 14:51:12.365965 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.01s 2025-08-29 14:51:12.365972 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.70s 2025-08-29 14:51:12.365978 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.66s 2025-08-29 14:51:12.365985 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.61s 2025-08-29 14:51:12.365991 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.59s 2025-08-29 14:51:12.365997 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 1.57s 2025-08-29 14:51:12.366004 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:12.366011 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:51:12.366037 | orchestrator | 2025-08-29 14:51:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:15.419687 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:15.422870 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:15.424317 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task a27a9bde-c502-4241-8ebd-97bf42a6d64f is in state STARTED 2025-08-29 14:51:15.424741 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:15.427377 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 
73c82cc7-50f4-4701-80b1-d88d88781036 is in state STARTED 2025-08-29 14:51:15.431936 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:15.433060 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state STARTED 2025-08-29 14:51:15.433108 | orchestrator | 2025-08-29 14:51:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:18.522745 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:18.523009 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:18.523579 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:18.523910 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task a27a9bde-c502-4241-8ebd-97bf42a6d64f is in state SUCCESS 2025-08-29 14:51:18.524537 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:18.525123 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task 73c82cc7-50f4-4701-80b1-d88d88781036 is in state STARTED 2025-08-29 14:51:18.525671 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:18.526096 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task 1cc1566c-f35c-49ba-a511-57ba4bceeb97 is in state SUCCESS 2025-08-29 14:51:18.526135 | orchestrator | 2025-08-29 14:51:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:21.615967 | orchestrator | 2025-08-29 14:51:21 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:21.621144 | orchestrator | 2025-08-29 14:51:21 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:21.623040 | orchestrator | 2025-08-29 14:51:21 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:21.635650 | orchestrator | 2025-08-29 14:51:21 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:21.636723 | orchestrator | 2025-08-29 14:51:21 | INFO  | Task 73c82cc7-50f4-4701-80b1-d88d88781036 is in state STARTED 2025-08-29 14:51:21.639085 | orchestrator | 2025-08-29 14:51:21 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:21.641378 | orchestrator | 2025-08-29 14:51:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:24.760200 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:24.760564 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:24.760603 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:24.761556 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:24.762003 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task 73c82cc7-50f4-4701-80b1-d88d88781036 is in state SUCCESS 2025-08-29 14:51:24.762983 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:24.763012 | orchestrator | 2025-08-29 14:51:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:27.847828 | orchestrator | 2025-08-29 
14:51:27 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:27.848357 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state STARTED 2025-08-29 14:51:27.849425 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:27.850497 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:27.851436 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:27.851595 | orchestrator | 2025-08-29 14:51:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:30.895597 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:30.897447 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task c22cad7e-0c65-4313-8e0f-24d328df54a9 is in state SUCCESS 2025-08-29 14:51:30.899461 | orchestrator | 2025-08-29 14:51:30.899510 | orchestrator | 2025-08-29 14:51:30.899517 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-08-29 14:51:30.899523 | orchestrator | 2025-08-29 14:51:30.899527 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:51:30.899531 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:00.199) 0:00:00.199 ********* 2025-08-29 14:51:30.899536 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:51:30.899541 | orchestrator | 2025-08-29 14:51:30.899545 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:51:30.899562 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:00.872) 0:00:01.072 ********* 2025-08-29 14:51:30.899567 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:30.899571 | orchestrator | 2025-08-29 14:51:30.899575 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-08-29 14:51:30.899579 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:01.411) 0:00:02.484 ********* 2025-08-29 14:51:30.899583 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:30.899587 | orchestrator | 2025-08-29 14:51:30.899591 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:30.899595 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.899601 | orchestrator | 2025-08-29 14:51:30.899605 | orchestrator | 2025-08-29 14:51:30.899609 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:30.899614 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.450) 0:00:02.934 ********* 2025-08-29 14:51:30.899621 | orchestrator | =============================================================================== 2025-08-29 14:51:30.899626 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.41s 2025-08-29 14:51:30.899629 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.87s 2025-08-29 14:51:30.899633 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.45s 2025-08-29 14:51:30.899637 | orchestrator | 2025-08-29 14:51:30.899640 | orchestrator | 2025-08-29 
14:51:30.899644 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:51:30.899648 | orchestrator | 2025-08-29 14:51:30.899652 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:51:30.899655 | orchestrator | Friday 29 August 2025 14:51:01 +0000 (0:00:00.613) 0:00:00.613 ********* 2025-08-29 14:51:30.899659 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:30.899664 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:30.899677 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:30.899681 | orchestrator | 2025-08-29 14:51:30.899685 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:51:30.899689 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:00.813) 0:00:01.427 ********* 2025-08-29 14:51:30.899693 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-08-29 14:51:30.899697 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-08-29 14:51:30.899700 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-08-29 14:51:30.899704 | orchestrator | 2025-08-29 14:51:30.899720 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-08-29 14:51:30.899725 | orchestrator | 2025-08-29 14:51:30.899728 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-08-29 14:51:30.899732 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:01.097) 0:00:02.524 ********* 2025-08-29 14:51:30.899751 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:30.899756 | orchestrator | 2025-08-29 14:51:30.899760 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-08-29 14:51:30.899764 | orchestrator | Friday 29 August 2025 14:51:04 +0000 (0:00:01.282) 0:00:03.806 ********* 2025-08-29 14:51:30.899767 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 14:51:30.899772 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 14:51:30.899776 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 14:51:30.899779 | orchestrator | 2025-08-29 14:51:30.899783 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-08-29 14:51:30.899787 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:01.155) 0:00:04.962 ********* 2025-08-29 14:51:30.899790 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 14:51:30.899794 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 14:51:30.899798 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 14:51:30.899802 | orchestrator | 2025-08-29 14:51:30.899806 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-08-29 14:51:30.899809 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:03.140) 0:00:08.103 ********* 2025-08-29 14:51:30.899813 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:30.899817 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:30.899821 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:30.899824 | orchestrator | 2025-08-29 14:51:30.899828 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 
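[editor's note] The memcached role above follows the common kolla-ansible pattern: render the service's config files under /etc/kolla/<service>/ on each host, compare the running container against the desired image and configuration, and notify a restart handler only when something changed. A much-simplified sketch of that notify/handler relationship (template name is hypothetical, and the real role uses kolla's container module rather than a plain docker restart):

    # Sketch only - illustrates the config-then-restart pattern, not the actual role code.
    - hosts: memcached
      tasks:
        - name: Copy config.json for memcached (simplified)
          ansible.builtin.template:
            src: memcached.json.j2            # hypothetical template name
            dest: /etc/kolla/memcached/config.json
            mode: "0660"
          notify: Restart memcached container

      handlers:
        - name: Restart memcached container
          # Stand-in for kolla's container handling, which recreates the
          # container only when its image or configuration actually changed.
          ansible.builtin.command: docker restart memcached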
2025-08-29 14:51:30.899832 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:02.758) 0:00:10.861 ********* 2025-08-29 14:51:30.899836 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:30.899839 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:30.899843 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:30.899847 | orchestrator | 2025-08-29 14:51:30.899851 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:30.899854 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.899858 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.899873 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.899877 | orchestrator | 2025-08-29 14:51:30.899880 | orchestrator | 2025-08-29 14:51:30.899884 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:30.899888 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:03.321) 0:00:14.183 ********* 2025-08-29 14:51:30.899891 | orchestrator | =============================================================================== 2025-08-29 14:51:30.899898 | orchestrator | memcached : Restart memcached container --------------------------------- 3.32s 2025-08-29 14:51:30.899902 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.14s 2025-08-29 14:51:30.899906 | orchestrator | memcached : Check memcached container ----------------------------------- 2.76s 2025-08-29 14:51:30.899910 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.28s 2025-08-29 14:51:30.899913 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.16s 2025-08-29 14:51:30.899917 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2025-08-29 14:51:30.899921 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2025-08-29 14:51:30.899925 | orchestrator | 2025-08-29 14:51:30.899928 | orchestrator | 2025-08-29 14:51:30.899932 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 14:51:30.899940 | orchestrator | 2025-08-29 14:51:30.899944 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 14:51:30.899947 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:00.441) 0:00:00.441 ********* 2025-08-29 14:51:30.899951 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:30.899955 | orchestrator | 2025-08-29 14:51:30.899959 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 14:51:30.899962 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:00.696) 0:00:01.138 ********* 2025-08-29 14:51:30.899966 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:30.899970 | orchestrator | 2025-08-29 14:51:30.899973 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:51:30.899977 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.626) 0:00:01.764 ********* 2025-08-29 14:51:30.899981 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:51:30.899985 | orchestrator 
| 2025-08-29 14:51:30.899988 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:51:30.899992 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.691) 0:00:02.456 ********* 2025-08-29 14:51:30.899996 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:30.899999 | orchestrator | 2025-08-29 14:51:30.900003 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 14:51:30.900007 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:01.589) 0:00:04.046 ********* 2025-08-29 14:51:30.900010 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:30.900014 | orchestrator | 2025-08-29 14:51:30.900018 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 14:51:30.900021 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:01.002) 0:00:05.049 ********* 2025-08-29 14:51:30.900025 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:51:30.900029 | orchestrator | 2025-08-29 14:51:30.900033 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 14:51:30.900036 | orchestrator | Friday 29 August 2025 14:51:21 +0000 (0:00:02.348) 0:00:07.397 ********* 2025-08-29 14:51:30.900040 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:51:30.900044 | orchestrator | 2025-08-29 14:51:30.900048 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 14:51:30.900051 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:01.281) 0:00:08.679 ********* 2025-08-29 14:51:30.900055 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:30.900059 | orchestrator | 2025-08-29 14:51:30.900062 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 14:51:30.900066 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:00.558) 0:00:09.238 ********* 2025-08-29 14:51:30.900070 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:30.900073 | orchestrator | 2025-08-29 14:51:30.900077 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:30.900082 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.900087 | orchestrator | 2025-08-29 14:51:30.900093 | orchestrator | 2025-08-29 14:51:30.900099 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:30.900105 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:00.419) 0:00:09.658 ********* 2025-08-29 14:51:30.900110 | orchestrator | =============================================================================== 2025-08-29 14:51:30.900116 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.35s 2025-08-29 14:51:30.900122 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.59s 2025-08-29 14:51:30.900128 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.28s 2025-08-29 14:51:30.900132 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.00s 2025-08-29 14:51:30.900136 | orchestrator | Get home directory of operator user ------------------------------------- 0.70s 2025-08-29 14:51:30.900146 | orchestrator | Get kubeconfig file 
----------------------------------------------------- 0.69s 2025-08-29 14:51:30.900150 | orchestrator | Create .kube directory -------------------------------------------------- 0.63s 2025-08-29 14:51:30.900153 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.56s 2025-08-29 14:51:30.900157 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.42s 2025-08-29 14:51:30.900161 | orchestrator | 2025-08-29 14:51:30.900165 | orchestrator | 2025-08-29 14:51:30.900172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:51:30.900175 | orchestrator | 2025-08-29 14:51:30.900179 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:51:30.900183 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:00.502) 0:00:00.502 ********* 2025-08-29 14:51:30.900187 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:30.900190 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:30.900194 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:30.900198 | orchestrator | 2025-08-29 14:51:30.900202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:51:30.900208 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:00.591) 0:00:01.094 ********* 2025-08-29 14:51:30.900212 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-08-29 14:51:30.900216 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-08-29 14:51:30.900219 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-08-29 14:51:30.900223 | orchestrator | 2025-08-29 14:51:30.900227 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-08-29 14:51:30.900231 | orchestrator | 2025-08-29 14:51:30.900234 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-08-29 14:51:30.900238 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:00.958) 0:00:02.053 ********* 2025-08-29 14:51:30.900242 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:30.900246 | orchestrator | 2025-08-29 14:51:30.900249 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-08-29 14:51:30.900253 | orchestrator | Friday 29 August 2025 14:51:04 +0000 (0:00:00.854) 0:00:02.907 ********* 2025-08-29 14:51:30.900259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900317 | orchestrator | 2025-08-29 14:51:30.900321 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-08-29 14:51:30.900325 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:02.324) 0:00:05.235 ********* 2025-08-29 14:51:30.900329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900365 | orchestrator | 2025-08-29 14:51:30.900369 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-08-29 14:51:30.900373 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:04.049) 0:00:09.284 ********* 2025-08-29 14:51:30.900377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900411 | orchestrator | 2025-08-29 14:51:30.900415 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-08-29 14:51:30.900419 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:03.618) 0:00:12.903 ********* 2025-08-29 14:51:30.900422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:30.900471 | orchestrator | 2025-08-29 14:51:30.900479 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 14:51:30.900483 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:02.054) 0:00:14.957 ********* 2025-08-29 14:51:30.900487 | orchestrator | 2025-08-29 14:51:30.900490 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 14:51:30.900494 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.147) 0:00:15.105 ********* 2025-08-29 14:51:30.900498 | orchestrator | 2025-08-29 14:51:30.900501 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 14:51:30.900505 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:00.418) 0:00:15.523 ********* 2025-08-29 14:51:30.900509 | orchestrator | 2025-08-29 14:51:30.900512 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-08-29 14:51:30.900516 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:00.214) 0:00:15.737 ********* 2025-08-29 14:51:30.900520 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:30.900524 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:30.900527 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:30.900531 | orchestrator | 2025-08-29 14:51:30.900535 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-08-29 14:51:30.900543 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:06.500) 0:00:22.238 
********* 2025-08-29 14:51:30.900547 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:30.900550 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:30.900554 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:30.900558 | orchestrator | 2025-08-29 14:51:30.900562 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:30.900566 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.900569 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.900573 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:30.900577 | orchestrator | 2025-08-29 14:51:30.900581 | orchestrator | 2025-08-29 14:51:30.900584 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:30.900588 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:05.341) 0:00:27.579 ********* 2025-08-29 14:51:30.900592 | orchestrator | =============================================================================== 2025-08-29 14:51:30.900595 | orchestrator | redis : Restart redis container ----------------------------------------- 6.50s 2025-08-29 14:51:30.900599 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.34s 2025-08-29 14:51:30.900603 | orchestrator | redis : Copying over default config.json files -------------------------- 4.05s 2025-08-29 14:51:30.900606 | orchestrator | redis : Copying over redis config files --------------------------------- 3.62s 2025-08-29 14:51:30.900610 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.33s 2025-08-29 14:51:30.900614 | orchestrator | redis : Check redis containers ------------------------------------------ 2.05s 2025-08-29 14:51:30.900617 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2025-08-29 14:51:30.900621 | orchestrator | redis : include_tasks --------------------------------------------------- 0.85s 2025-08-29 14:51:30.900625 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.78s 2025-08-29 14:51:30.900628 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s 2025-08-29 14:51:30.900632 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:30.901194 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:30.902677 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:30.903043 | orchestrator | 2025-08-29 14:51:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:33.941276 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:33.943040 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:33.944319 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:33.945774 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 
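The redis and redis-sentinel items echoed above are kolla-ansible service definitions rendered as Python dicts; the healthcheck block in each one maps onto a Docker-level healthcheck for the container. Below is a minimal sketch of that mapping, using a trimmed copy of the redis-sentinel definition; the helper function is illustrative only and not the actual kolla-ansible code.

# Illustrative only: a trimmed service definition in the same shape as the
# redis-sentinel item logged above, plus a helper that turns its healthcheck
# block into Docker CLI flags. Not the kolla-ansible implementation.
redis_sentinel = {
    "container_name": "redis_sentinel",
    "image": "registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711",
    "volumes": [
        "/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"],
        "timeout": "30",
    },
}


def docker_healthcheck_options(healthcheck: dict) -> list[str]:
    """Translate the kolla-style healthcheck dict into docker run flags (sketch)."""
    # The numeric fields in the service definition are seconds.
    return [
        f"--health-cmd={' '.join(healthcheck['test'][1:])}",
        f"--health-interval={healthcheck['interval']}s",
        f"--health-retries={healthcheck['retries']}",
        f"--health-start-period={healthcheck['start_period']}s",
        f"--health-timeout={healthcheck['timeout']}s",
    ]


if __name__ == "__main__":
    print(docker_healthcheck_options(redis_sentinel["healthcheck"]))
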
14:51:33.945820 | orchestrator | 2025-08-29 14:51:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:36.999768 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:37.006080 | orchestrator | 2025-08-29 14:51:37 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:37.006162 | orchestrator | 2025-08-29 14:51:37 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:37.007842 | orchestrator | 2025-08-29 14:51:37 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:37.008337 | orchestrator | 2025-08-29 14:51:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:40.047362 | orchestrator | 2025-08-29 14:51:40 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:40.048858 | orchestrator | 2025-08-29 14:51:40 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:40.051945 | orchestrator | 2025-08-29 14:51:40 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:40.053589 | orchestrator | 2025-08-29 14:51:40 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:40.053640 | orchestrator | 2025-08-29 14:51:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:43.100545 | orchestrator | 2025-08-29 14:51:43 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:43.100645 | orchestrator | 2025-08-29 14:51:43 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:43.100660 | orchestrator | 2025-08-29 14:51:43 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:43.100673 | orchestrator | 2025-08-29 14:51:43 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:43.100684 | orchestrator | 2025-08-29 14:51:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:46.201773 | orchestrator | 2025-08-29 14:51:46 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:46.201832 | orchestrator | 2025-08-29 14:51:46 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:46.201844 | orchestrator | 2025-08-29 14:51:46 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:46.201854 | orchestrator | 2025-08-29 14:51:46 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:46.201865 | orchestrator | 2025-08-29 14:51:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:49.183459 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:49.187027 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:49.188199 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:49.189038 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:49.189069 | orchestrator | 2025-08-29 14:51:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:52.221728 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:52.225136 | 
orchestrator | 2025-08-29 14:51:52 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:52.225219 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:52.228215 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:52.229087 | orchestrator | 2025-08-29 14:51:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:55.283751 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:55.287713 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:55.290353 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:55.292325 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:55.292381 | orchestrator | 2025-08-29 14:51:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:58.325703 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:51:58.330745 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:51:58.331384 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:51:58.334376 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:51:58.334424 | orchestrator | 2025-08-29 14:51:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:01.372217 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:01.374698 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:01.378519 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:01.379689 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:01.380735 | orchestrator | 2025-08-29 14:52:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:04.472620 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:04.473546 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:04.474979 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:04.476366 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:04.476473 | orchestrator | 2025-08-29 14:52:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:07.529225 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:07.530897 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:07.532520 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:07.533437 | 
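The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines come from polling the OSISM manager for the state of the background tasks that run each play. A minimal sketch of such a wait loop follows; get_task_state() is a hypothetical stand-in, since the real client API is not shown in this log.

import time


def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for the real API call; the log only shows the
    states it returns (STARTED, SUCCESS, ...), not how they are fetched."""
    ...


def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll every task until all of them have left the STARTED state (sketch)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
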
orchestrator | 2025-08-29 14:52:07 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:07.535419 | orchestrator | 2025-08-29 14:52:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:10.605775 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:10.606991 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:10.608523 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:10.609912 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:10.609975 | orchestrator | 2025-08-29 14:52:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:13.665708 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:13.669999 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:13.670992 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:13.672121 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:13.672149 | orchestrator | 2025-08-29 14:52:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:16.714539 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:16.716469 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:16.717601 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:16.719424 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:16.719449 | orchestrator | 2025-08-29 14:52:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:19.753189 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:19.754530 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:19.756367 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:19.758512 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state STARTED 2025-08-29 14:52:19.759199 | orchestrator | 2025-08-29 14:52:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:22.791116 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:22.791958 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:22.792741 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:22.793983 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task 1dd72f0e-8059-4920-8bc4-473e674d0095 is in state SUCCESS 2025-08-29 14:52:22.794041 | orchestrator | 2025-08-29 14:52:22.795124 | orchestrator | 2025-08-29 14:52:22.795147 | orchestrator | PLAY [Group hosts based on 
configuration] ************************************** 2025-08-29 14:52:22.795156 | orchestrator | 2025-08-29 14:52:22.795163 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:52:22.795171 | orchestrator | Friday 29 August 2025 14:51:01 +0000 (0:00:00.703) 0:00:00.703 ********* 2025-08-29 14:52:22.795177 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:52:22.795192 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:52:22.795199 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:52:22.795206 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:52:22.795212 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:52:22.795218 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:52:22.795225 | orchestrator | 2025-08-29 14:52:22.795232 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:52:22.795238 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:01.893) 0:00:02.597 ********* 2025-08-29 14:52:22.795245 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:52:22.795252 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:52:22.795290 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:52:22.795683 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:52:22.795706 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:52:22.795713 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:52:22.795720 | orchestrator | 2025-08-29 14:52:22.795726 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-08-29 14:52:22.795733 | orchestrator | 2025-08-29 14:52:22.795740 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-08-29 14:52:22.795747 | orchestrator | Friday 29 August 2025 14:51:04 +0000 (0:00:01.343) 0:00:03.940 ********* 2025-08-29 14:52:22.795754 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:52:22.795763 | orchestrator | 2025-08-29 14:52:22.795769 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 14:52:22.795776 | orchestrator | Friday 29 August 2025 14:51:07 +0000 (0:00:03.179) 0:00:07.120 ********* 2025-08-29 14:52:22.795783 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:52:22.795790 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:52:22.795796 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:52:22.795802 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:52:22.795809 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:52:22.795815 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:52:22.795821 | orchestrator | 2025-08-29 14:52:22.795827 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 14:52:22.795833 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:01.842) 0:00:08.963 ********* 2025-08-29 14:52:22.795840 | 
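The module-load tasks above first load the openvswitch kernel module on every node and then persist it through modules-load.d so it is loaded again after a reboot (the "Persist modules" results follow below). A rough sketch of the equivalent host-side steps, assuming root privileges; it mirrors the systemd modules-load.d convention rather than the role's actual task code.

import pathlib
import subprocess


def load_and_persist_module(name: str = "openvswitch") -> None:
    """Load a kernel module now and persist it via /etc/modules-load.d (sketch)."""
    # Load the module into the running kernel.
    subprocess.run(["modprobe", name], check=True)
    # Persist it so systemd-modules-load picks it up on the next boot.
    conf = pathlib.Path("/etc/modules-load.d") / f"{name}.conf"
    conf.write_text(f"{name}\n")
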
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:52:22.795846 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:52:22.795852 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:52:22.795858 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:52:22.795863 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:52:22.795870 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:52:22.795875 | orchestrator | 2025-08-29 14:52:22.795881 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 14:52:22.795887 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:02.746) 0:00:11.709 ********* 2025-08-29 14:52:22.795893 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-08-29 14:52:22.795900 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:22.795907 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-08-29 14:52:22.795913 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:22.795920 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-08-29 14:52:22.795927 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:22.795934 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-08-29 14:52:22.795940 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:22.795946 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-08-29 14:52:22.795952 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:22.795959 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-08-29 14:52:22.795965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:22.795971 | orchestrator | 2025-08-29 14:52:22.795976 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-08-29 14:52:22.795982 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:03.189) 0:00:14.898 ********* 2025-08-29 14:52:22.795999 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:22.796005 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:22.796011 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:22.796017 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:22.796023 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:22.796029 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:22.796035 | orchestrator | 2025-08-29 14:52:22.796041 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-08-29 14:52:22.796048 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:02.037) 0:00:16.936 ********* 2025-08-29 14:52:22.796069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-08-29 14:52:22.796083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796180 | orchestrator | 2025-08-29 14:52:22.796186 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-08-29 14:52:22.796193 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:02.875) 0:00:19.812 ********* 2025-08-29 14:52:22.796202 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796343 | orchestrator | 2025-08-29 14:52:22.796349 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-08-29 
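"Copying over config.json files for services" drops a per-service config.json into /etc/kolla/<service>/ on the host; the container sees it under /var/lib/kolla/config_files/ (mounted read-only in the volume lists above) and copies its real configuration into place at start-up. The exact openvswitch payloads are not in this log, so the sketch below only shows the general shape such a file takes; every value is an example.

import json

# Illustrative config.json payload in the general shape kolla containers expect:
# a start command plus a list of files to copy into place before it runs.
# The concrete command and paths below are examples, not taken from this deployment.
config = {
    "command": "/usr/bin/some-daemon --config /etc/some-daemon/daemon.conf",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/daemon.conf",
            "dest": "/etc/some-daemon/daemon.conf",
            "owner": "root",
            "perm": "0600",
        }
    ],
}

print(json.dumps(config, indent=2))
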
14:52:22.796359 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:05.408) 0:00:25.220 ********* 2025-08-29 14:52:22.796366 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:22.796373 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:22.796379 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:22.796386 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:22.796394 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:22.796401 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:22.796408 | orchestrator | 2025-08-29 14:52:22.796414 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-08-29 14:52:22.796421 | orchestrator | Friday 29 August 2025 14:51:27 +0000 (0:00:01.739) 0:00:26.959 ********* 2025-08-29 14:52:22.796428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:52:22.796556 | orchestrator | 2025-08-29 14:52:22.796562 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:52:22.796568 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:02.120) 0:00:29.080 ********* 2025-08-29 14:52:22.796574 | orchestrator | 2025-08-29 14:52:22.796580 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:52:22.796586 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.489) 0:00:29.570 ********* 2025-08-29 14:52:22.796593 | orchestrator | 2025-08-29 14:52:22.796599 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 
14:52:22.796606 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.247) 0:00:29.817 ********* 2025-08-29 14:52:22.796612 | orchestrator | 2025-08-29 14:52:22.796619 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:52:22.796625 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.255) 0:00:30.073 ********* 2025-08-29 14:52:22.796631 | orchestrator | 2025-08-29 14:52:22.796638 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:52:22.796645 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.185) 0:00:30.258 ********* 2025-08-29 14:52:22.796651 | orchestrator | 2025-08-29 14:52:22.796658 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:52:22.796664 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.177) 0:00:30.436 ********* 2025-08-29 14:52:22.796671 | orchestrator | 2025-08-29 14:52:22.796678 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-08-29 14:52:22.796684 | orchestrator | Friday 29 August 2025 14:51:31 +0000 (0:00:00.172) 0:00:30.608 ********* 2025-08-29 14:52:22.796690 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:22.796697 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:22.796704 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:22.796710 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:22.796717 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:22.796723 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:22.796729 | orchestrator | 2025-08-29 14:52:22.796735 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-08-29 14:52:22.796741 | orchestrator | Friday 29 August 2025 14:51:46 +0000 (0:00:15.610) 0:00:46.218 ********* 2025-08-29 14:52:22.796747 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:52:22.796753 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:52:22.796759 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:52:22.796765 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:52:22.796772 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:52:22.796778 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:52:22.796784 | orchestrator | 2025-08-29 14:52:22.796791 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 14:52:22.796797 | orchestrator | Friday 29 August 2025 14:51:48 +0000 (0:00:01.488) 0:00:47.708 ********* 2025-08-29 14:52:22.796804 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:22.796810 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:22.796817 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:22.796823 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:22.796830 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:22.796836 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:22.796843 | orchestrator | 2025-08-29 14:52:22.796849 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-08-29 14:52:22.796856 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:10.627) 0:00:58.335 ********* 2025-08-29 14:52:22.796867 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-08-29 14:52:22.796873 | orchestrator | changed: 
[testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-08-29 14:52:22.796880 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-08-29 14:52:22.796892 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-08-29 14:52:22.796899 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-08-29 14:52:22.796905 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-08-29 14:52:22.796912 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-08-29 14:52:22.796918 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-08-29 14:52:22.796928 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-08-29 14:52:22.796935 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-08-29 14:52:22.796942 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-08-29 14:52:22.796949 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-08-29 14:52:22.796955 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:52:22.796962 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:52:22.796968 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:52:22.796975 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:52:22.796981 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:52:22.796988 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:52:22.796994 | orchestrator | 2025-08-29 14:52:22.797001 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-08-29 14:52:22.797008 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:07.273) 0:01:05.608 ********* 2025-08-29 14:52:22.797014 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-08-29 14:52:22.797020 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:22.797027 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-08-29 14:52:22.797033 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:22.797039 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-08-29 14:52:22.797046 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:22.797052 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-08-29 14:52:22.797059 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-08-29 14:52:22.797065 | orchestrator | changed: [testbed-node-2] => 
(item=br-ex) 2025-08-29 14:52:22.797072 | orchestrator | 2025-08-29 14:52:22.797077 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-08-29 14:52:22.797083 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:02.851) 0:01:08.460 ********* 2025-08-29 14:52:22.797090 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-08-29 14:52:22.797095 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-08-29 14:52:22.797100 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:22.797106 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-08-29 14:52:22.797112 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:22.797119 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:22.797125 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-08-29 14:52:22.797136 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-08-29 14:52:22.797142 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-08-29 14:52:22.797148 | orchestrator | 2025-08-29 14:52:22.797154 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 14:52:22.797161 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:04.403) 0:01:12.864 ********* 2025-08-29 14:52:22.797167 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:22.797174 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:22.797180 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:22.797186 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:22.797193 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:22.797199 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:22.797205 | orchestrator | 2025-08-29 14:52:22.797213 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:52:22.797220 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:52:22.797232 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:52:22.797239 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:52:22.797246 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:52:22.797253 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:52:22.797259 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:52:22.797281 | orchestrator | 2025-08-29 14:52:22.797287 | orchestrator | 2025-08-29 14:52:22.797294 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:52:22.797300 | orchestrator | Friday 29 August 2025 14:52:22 +0000 (0:00:08.821) 0:01:21.685 ********* 2025-08-29 14:52:22.797310 | orchestrator | =============================================================================== 2025-08-29 14:52:22.797317 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.45s 2025-08-29 14:52:22.797323 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 15.61s 2025-08-29 14:52:22.797330 | orchestrator | openvswitch : 
Set system-id, hostname and hw-offload -------------------- 7.27s 2025-08-29 14:52:22.797336 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.41s 2025-08-29 14:52:22.797343 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.40s 2025-08-29 14:52:22.797349 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.19s 2025-08-29 14:52:22.797355 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.18s 2025-08-29 14:52:22.797362 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.88s 2025-08-29 14:52:22.797368 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.85s 2025-08-29 14:52:22.797375 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.75s 2025-08-29 14:52:22.797381 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.12s 2025-08-29 14:52:22.797387 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.04s 2025-08-29 14:52:22.797394 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.90s 2025-08-29 14:52:22.797400 | orchestrator | module-load : Load modules ---------------------------------------------- 1.84s 2025-08-29 14:52:22.797406 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.74s 2025-08-29 14:52:22.797418 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.53s 2025-08-29 14:52:22.797424 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.49s 2025-08-29 14:52:22.797431 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s 2025-08-29 14:52:22.797437 | orchestrator | 2025-08-29 14:52:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:25.825468 | orchestrator | 2025-08-29 14:52:25 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:25.827319 | orchestrator | 2025-08-29 14:52:25 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:25.828446 | orchestrator | 2025-08-29 14:52:25 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:52:25.829576 | orchestrator | 2025-08-29 14:52:25 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:25.829965 | orchestrator | 2025-08-29 14:52:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:28.865386 | orchestrator | 2025-08-29 14:52:28 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:28.866559 | orchestrator | 2025-08-29 14:52:28 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:28.868601 | orchestrator | 2025-08-29 14:52:28 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:52:28.869976 | orchestrator | 2025-08-29 14:52:28 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:28.870150 | orchestrator | 2025-08-29 14:52:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:31.922304 | orchestrator | 2025-08-29 14:52:31 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:52:31.925361 | orchestrator | 2025-08-29 14:52:31 | 
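The openvswitch play above creates the br-ex bridge on the network nodes and attaches the vxlan0 port to it before restarting the OVS containers. A minimal sketch of that idempotent bridge/port setup, assuming plain ovs-vsctl on the host (the job itself drives this through the kolla-ansible openvswitch role, not through a helper like this):

```python
import subprocess

def ensure_ovs_port(bridge: str, port: str) -> None:
    """Idempotently create an OVS bridge and attach a port to it.

    --may-exist turns both calls into no-ops when the bridge or port is
    already there, which is why reruns report "ok" instead of "changed".
    """
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port], check=True)

# The combination seen in the log: external bridge br-ex with port vxlan0.
ensure_ovs_port("br-ex", "vxlan0")
```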
INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:52:31.926386 | orchestrator | 2025-08-29 14:52:31 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:52:31.927543 | orchestrator | 2025-08-29 14:52:31 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:52:31.927575 | orchestrator | 2025-08-29 14:52:31 | INFO  | Wait 1 second(s) until the next check
[... the four tasks f855701c-c6ef-4545-9242-efc74dbde29a, c0a31e81-defa-4d2a-a60b-a97581d6af6e, bf90a257-47be-47ec-a1be-ae3787058cff and 8b76369f-e59f-4b25-8c0f-b572ea233628 remained in state STARTED; the same status check and one-second wait repeated roughly every three seconds from 14:52:35 until 14:53:51 ...]
2025-08-29 14:53:54.562847 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:53:54.565243 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 
c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:53:54.566623 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:53:54.568864 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:53:54.568881 | orchestrator | 2025-08-29 14:53:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:57.605667 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state STARTED 2025-08-29 14:53:57.607816 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:53:57.611717 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:53:57.613432 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:53:57.613462 | orchestrator | 2025-08-29 14:53:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:00.651540 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task f855701c-c6ef-4545-9242-efc74dbde29a is in state SUCCESS 2025-08-29 14:54:00.652474 | orchestrator | 2025-08-29 14:54:00.652507 | orchestrator | 2025-08-29 14:54:00.652513 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-08-29 14:54:00.652519 | orchestrator | 2025-08-29 14:54:00.652524 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 14:54:00.652529 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:00.167) 0:00:00.167 ********* 2025-08-29 14:54:00.652534 | orchestrator | ok: [localhost] => { 2025-08-29 14:54:00.652541 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-08-29 14:54:00.652546 | orchestrator | } 2025-08-29 14:54:00.652551 | orchestrator | 2025-08-29 14:54:00.652556 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-08-29 14:54:00.652561 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:00.100) 0:00:00.268 ********* 2025-08-29 14:54:00.652567 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-08-29 14:54:00.652573 | orchestrator | ...ignoring 2025-08-29 14:54:00.652578 | orchestrator | 2025-08-29 14:54:00.652583 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-08-29 14:54:00.652600 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:03.258) 0:00:03.526 ********* 2025-08-29 14:54:00.652605 | orchestrator | skipping: [localhost] 2025-08-29 14:54:00.652610 | orchestrator | 2025-08-29 14:54:00.652615 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-08-29 14:54:00.652635 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:00.102) 0:00:03.629 ********* 2025-08-29 14:54:00.652640 | orchestrator | ok: [localhost] 2025-08-29 14:54:00.652645 | orchestrator | 2025-08-29 14:54:00.652649 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:54:00.652654 | orchestrator | 2025-08-29 14:54:00.652659 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:54:00.652663 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:00.202) 0:00:03.831 ********* 2025-08-29 14:54:00.652668 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:00.652673 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:00.652677 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:00.652682 | orchestrator | 2025-08-29 14:54:00.652687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:54:00.652692 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:00.438) 0:00:04.270 ********* 2025-08-29 14:54:00.652697 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-08-29 14:54:00.652702 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-08-29 14:54:00.652707 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-08-29 14:54:00.652712 | orchestrator | 2025-08-29 14:54:00.652716 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-08-29 14:54:00.652721 | orchestrator | 2025-08-29 14:54:00.652726 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 14:54:00.652730 | orchestrator | Friday 29 August 2025 14:51:31 +0000 (0:00:01.157) 0:00:05.427 ********* 2025-08-29 14:54:00.652735 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:54:00.652740 | orchestrator | 2025-08-29 14:54:00.652745 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 14:54:00.652749 | orchestrator | Friday 29 August 2025 14:51:32 +0000 (0:00:01.427) 0:00:06.855 ********* 2025-08-29 14:54:00.652754 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:00.652759 | orchestrator | 2025-08-29 14:54:00.652763 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-08-29 14:54:00.652768 | orchestrator | Friday 29 August 2025 14:51:33 +0000 (0:00:01.503) 0:00:08.359 ********* 2025-08-29 14:54:00.652773 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.652778 | orchestrator | 2025-08-29 14:54:00.652782 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
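The 'Check RabbitMQ service' task above probes the management endpoint and is allowed to fail on a fresh deployment (hence the "...ignoring"); only a successful probe would switch kolla_action_rabbitmq to "upgrade". A rough Python equivalent of that probe, assuming the 192.168.16.9:15672 endpoint and the "RabbitMQ Management" banner from the log (the playbook implements this as an Ansible wait task, not as code like this):

```python
import socket

def rabbitmq_already_running(host: str, port: int = 15672, timeout: float = 3.0) -> bool:
    """Return True if the RabbitMQ management UI answers and shows its banner."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # Plain HTTP GET; the management UI answers with a page whose
            # title contains "RabbitMQ Management".
            sock.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return b"RabbitMQ Management" in sock.recv(65536)
    except OSError:
        # Connection refused or timed out: no broker yet, i.e. the
        # "fatal ... ...ignoring" case seen above.
        return False

# Upgrade only when a broker is already answering; otherwise keep the
# regular deploy action (variable names here are illustrative).
kolla_action_rabbitmq = "upgrade" if rabbitmq_already_running("192.168.16.9") else "deploy"
```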
************************************* 2025-08-29 14:54:00.652787 | orchestrator | Friday 29 August 2025 14:51:34 +0000 (0:00:00.554) 0:00:08.913 ********* 2025-08-29 14:54:00.652792 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.652796 | orchestrator | 2025-08-29 14:54:00.652801 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-08-29 14:54:00.652805 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:00.506) 0:00:09.419 ********* 2025-08-29 14:54:00.652810 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.652815 | orchestrator | 2025-08-29 14:54:00.652819 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-08-29 14:54:00.652824 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:00.414) 0:00:09.834 ********* 2025-08-29 14:54:00.652828 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.652833 | orchestrator | 2025-08-29 14:54:00.652838 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 14:54:00.652843 | orchestrator | Friday 29 August 2025 14:51:36 +0000 (0:00:00.897) 0:00:10.732 ********* 2025-08-29 14:54:00.652848 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:54:00.652852 | orchestrator | 2025-08-29 14:54:00.652857 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 14:54:00.652862 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:01.315) 0:00:12.047 ********* 2025-08-29 14:54:00.652866 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:00.652874 | orchestrator | 2025-08-29 14:54:00.652879 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-08-29 14:54:00.652884 | orchestrator | Friday 29 August 2025 14:51:38 +0000 (0:00:01.005) 0:00:13.053 ********* 2025-08-29 14:54:00.652888 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.652893 | orchestrator | 2025-08-29 14:54:00.652897 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-08-29 14:54:00.652902 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:00.508) 0:00:13.561 ********* 2025-08-29 14:54:00.652906 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.652911 | orchestrator | 2025-08-29 14:54:00.652923 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-08-29 14:54:00.652927 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:00.461) 0:00:14.023 ********* 2025-08-29 14:54:00.652940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.652948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.652954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.652963 | orchestrator | 2025-08-29 14:54:00.652968 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-08-29 14:54:00.652973 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:01.288) 0:00:15.312 ********* 2025-08-29 14:54:00.652983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.652991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.652997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.653002 | orchestrator | 2025-08-29 14:54:00.653006 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-08-29 14:54:00.653011 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:02.108) 0:00:17.420 ********* 2025-08-29 14:54:00.653016 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 14:54:00.653021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 14:54:00.653030 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 14:54:00.653034 | orchestrator | 2025-08-29 14:54:00.653039 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-08-29 14:54:00.653043 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:02.100) 0:00:19.521 ********* 
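The configuration tasks around this point render config.json plus the RabbitMQ config files into each node's /etc/kolla/rabbitmq directory, from which the container later copies them into place. A simplified illustration of that copy-config pattern, with the service definition trimmed down (not the actual kolla templates):

```python
import json
from pathlib import Path

# Cut-down stand-in for the rabbitmq service definition printed above.
rabbitmq_service = {
    "command": "rabbitmq-server",
    "config_files": [
        {"source": "/var/lib/kolla/config_files/rabbitmq-env.conf",
         "dest": "/etc/rabbitmq/rabbitmq-env.conf", "owner": "rabbitmq", "perm": "0600"},
        {"source": "/var/lib/kolla/config_files/rabbitmq.conf",
         "dest": "/etc/rabbitmq/rabbitmq.conf", "owner": "rabbitmq", "perm": "0600"},
    ],
}

def write_config_json(config_dir: str) -> None:
    """Render a kolla-style config.json for the rabbitmq service."""
    target = Path(config_dir) / "rabbitmq" / "config.json"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(rabbitmq_service, indent=2))

# On the real nodes the target is /etc/kolla; a relative path keeps the
# sketch runnable without root.
write_config_json("kolla-config")
```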
2025-08-29 14:54:00.653048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 14:54:00.653053 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 14:54:00.653057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 14:54:00.653062 | orchestrator | 2025-08-29 14:54:00.653066 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-08-29 14:54:00.653072 | orchestrator | Friday 29 August 2025 14:51:47 +0000 (0:00:02.442) 0:00:21.963 ********* 2025-08-29 14:54:00.653077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 14:54:00.653082 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 14:54:00.653088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 14:54:00.653093 | orchestrator | 2025-08-29 14:54:00.653101 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-08-29 14:54:00.653107 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:01.627) 0:00:23.591 ********* 2025-08-29 14:54:00.653112 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 14:54:00.653150 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 14:54:00.653157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 14:54:00.653162 | orchestrator | 2025-08-29 14:54:00.653168 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-08-29 14:54:00.653173 | orchestrator | Friday 29 August 2025 14:51:52 +0000 (0:00:03.692) 0:00:27.284 ********* 2025-08-29 14:54:00.653178 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 14:54:00.653183 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 14:54:00.653188 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 14:54:00.653211 | orchestrator | 2025-08-29 14:54:00.653223 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-08-29 14:54:00.653229 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:01.716) 0:00:29.000 ********* 2025-08-29 14:54:00.653234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 14:54:00.653239 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 14:54:00.653243 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 14:54:00.653248 | orchestrator | 2025-08-29 14:54:00.653253 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 14:54:00.653257 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:01.962) 0:00:30.963 ********* 2025-08-29 14:54:00.653262 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.653267 | orchestrator | skipping: [testbed-node-1] 
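The healthcheck block repeated in the container definitions (interval 30, 3 retries, start period 5, test CMD-SHELL healthcheck_rabbitmq) is what lets the container runtime flag the broker healthy or unhealthy after the restart. Purely as an illustration, the same spec expressed as docker run health flags (kolla-ansible drives its own container module rather than the docker CLI):

```python
# Healthcheck exactly as printed in the rabbitmq container definition above.
healthcheck = {"interval": "30", "retries": "3", "start_period": "5",
               "test": ["CMD-SHELL", "healthcheck_rabbitmq"], "timeout": "30"}

def health_flags(hc):
    """Translate a kolla-style healthcheck dict into docker run flags."""
    return [
        "--health-cmd", hc["test"][1],
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-timeout", f"{hc['timeout']}s",
        "--health-start-period", f"{hc['start_period']}s",
    ]

print(" ".join(health_flags(healthcheck)))
```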
2025-08-29 14:54:00.653271 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:00.653276 | orchestrator | 2025-08-29 14:54:00.653280 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-08-29 14:54:00.653285 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:00.756) 0:00:31.719 ********* 2025-08-29 14:54:00.653299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.653307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.653322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:54:00.653330 | orchestrator | 2025-08-29 14:54:00.653341 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-08-29 14:54:00.653347 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:01.990) 0:00:33.710 ********* 2025-08-29 14:54:00.653354 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:00.653361 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:00.653368 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:00.653375 | orchestrator | 2025-08-29 14:54:00.653382 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-08-29 14:54:00.653389 | orchestrator | Friday 29 August 2025 14:52:00 +0000 (0:00:01.086) 0:00:34.796 ********* 2025-08-29 14:54:00.653396 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:00.653404 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:00.653419 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:00.653427 | orchestrator | 2025-08-29 14:54:00.653435 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-08-29 14:54:00.653443 | orchestrator | Friday 29 August 2025 14:52:11 +0000 (0:00:11.210) 0:00:46.007 ********* 2025-08-29 14:54:00.653448 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:00.653453 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:00.653457 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:00.653462 | orchestrator | 2025-08-29 14:54:00.653466 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 14:54:00.653471 | orchestrator | 2025-08-29 14:54:00.653475 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 14:54:00.653480 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:00.422) 0:00:46.429 ********* 2025-08-29 14:54:00.653485 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:00.653489 | orchestrator | 2025-08-29 14:54:00.653494 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 14:54:00.653498 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:00.778) 0:00:47.208 ********* 2025-08-29 14:54:00.653503 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:00.653507 | orchestrator | 2025-08-29 14:54:00.653512 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 14:54:00.653516 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:00.340) 0:00:47.548 ********* 2025-08-29 14:54:00.653521 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:00.653525 | orchestrator | 2025-08-29 14:54:00.653530 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 14:54:00.653534 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:01.952) 0:00:49.501 ********* 2025-08-29 14:54:00.653539 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:00.653543 | orchestrator | 2025-08-29 14:54:00.653548 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 14:54:00.653552 | orchestrator | 2025-08-29 14:54:00.653557 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 
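The three 'Restart rabbitmq services' plays that follow restart the broker one node at a time and wait for it to report started again before touching the next node, so the cluster keeps quorum throughout the restart. A minimal sketch of that serial-restart loop, with hypothetical restart/health callbacks standing in for the container handler and healthcheck:

```python
import time

def restart_rabbitmq_serially(nodes, restart, is_healthy, timeout=300):
    """Restart the rabbitmq container node by node, waiting in between."""
    for node in nodes:
        restart(node)
        deadline = time.monotonic() + timeout
        while not is_healthy(node):
            if time.monotonic() > deadline:
                raise TimeoutError(f"rabbitmq on {node} did not come back")
            time.sleep(5)

restart_rabbitmq_serially(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    restart=lambda node: print(f"restarting rabbitmq on {node}"),
    is_healthy=lambda node: True,  # placeholder health probe
)
```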
14:54:00.653561 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:55.470) 0:01:44.971 ********* 2025-08-29 14:54:00.653566 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:00.653571 | orchestrator | 2025-08-29 14:54:00.653578 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 14:54:00.653585 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.639) 0:01:45.611 ********* 2025-08-29 14:54:00.653592 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:00.653600 | orchestrator | 2025-08-29 14:54:00.653612 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 14:54:00.653620 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.701) 0:01:46.313 ********* 2025-08-29 14:54:00.653627 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:00.653634 | orchestrator | 2025-08-29 14:54:00.653640 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 14:54:00.653647 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:07.142) 0:01:53.455 ********* 2025-08-29 14:54:00.653653 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:00.653660 | orchestrator | 2025-08-29 14:54:00.653667 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 14:54:00.653674 | orchestrator | 2025-08-29 14:54:00.653681 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 14:54:00.653688 | orchestrator | Friday 29 August 2025 14:53:32 +0000 (0:00:13.456) 0:02:06.911 ********* 2025-08-29 14:54:00.653695 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:00.653702 | orchestrator | 2025-08-29 14:54:00.653708 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 14:54:00.653715 | orchestrator | Friday 29 August 2025 14:53:33 +0000 (0:00:00.705) 0:02:07.617 ********* 2025-08-29 14:54:00.653721 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:00.653735 | orchestrator | 2025-08-29 14:54:00.653743 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 14:54:00.653755 | orchestrator | Friday 29 August 2025 14:53:33 +0000 (0:00:00.305) 0:02:07.922 ********* 2025-08-29 14:54:00.653762 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:00.653768 | orchestrator | 2025-08-29 14:54:00.653775 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 14:54:00.653782 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:02.054) 0:02:09.976 ********* 2025-08-29 14:54:00.653788 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:00.653795 | orchestrator | 2025-08-29 14:54:00.653801 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-08-29 14:54:00.653809 | orchestrator | 2025-08-29 14:54:00.653815 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-08-29 14:54:00.653822 | orchestrator | Friday 29 August 2025 14:53:54 +0000 (0:00:19.344) 0:02:29.321 ********* 2025-08-29 14:54:00.653828 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:54:00.653834 | orchestrator | 2025-08-29 14:54:00.653841 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-08-29 14:54:00.653848 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:00.791) 0:02:30.112 ********* 2025-08-29 14:54:00.653854 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 14:54:00.653861 | orchestrator | enable_outward_rabbitmq_True 2025-08-29 14:54:00.653868 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 14:54:00.653880 | orchestrator | outward_rabbitmq_restart 2025-08-29 14:54:00.653888 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:00.653895 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:00.653902 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:00.653909 | orchestrator | 2025-08-29 14:54:00.653915 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-08-29 14:54:00.653923 | orchestrator | skipping: no hosts matched 2025-08-29 14:54:00.653956 | orchestrator | 2025-08-29 14:54:00.653966 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-08-29 14:54:00.653971 | orchestrator | skipping: no hosts matched 2025-08-29 14:54:00.653976 | orchestrator | 2025-08-29 14:54:00.653980 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-08-29 14:54:00.653985 | orchestrator | skipping: no hosts matched 2025-08-29 14:54:00.653989 | orchestrator | 2025-08-29 14:54:00.653994 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:00.653999 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 14:54:00.654005 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 14:54:00.654009 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:54:00.654057 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:54:00.654062 | orchestrator | 2025-08-29 14:54:00.654067 | orchestrator | 2025-08-29 14:54:00.654072 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:54:00.654077 | orchestrator | Friday 29 August 2025 14:53:58 +0000 (0:00:02.356) 0:02:32.468 ********* 2025-08-29 14:54:00.654082 | orchestrator | =============================================================================== 2025-08-29 14:54:00.654086 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 88.27s 2025-08-29 14:54:00.654091 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 11.21s 2025-08-29 14:54:00.654095 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.15s 2025-08-29 14:54:00.654106 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.69s 2025-08-29 14:54:00.654110 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.26s 2025-08-29 14:54:00.654115 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.44s 2025-08-29 14:54:00.654120 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.36s 2025-08-29 14:54:00.654124 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 
2.12s 2025-08-29 14:54:00.654129 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.11s 2025-08-29 14:54:00.654133 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.10s 2025-08-29 14:54:00.654138 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.99s 2025-08-29 14:54:00.654142 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.96s 2025-08-29 14:54:00.654146 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.72s 2025-08-29 14:54:00.654151 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.63s 2025-08-29 14:54:00.654156 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.50s 2025-08-29 14:54:00.654160 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.43s 2025-08-29 14:54:00.654165 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.35s 2025-08-29 14:54:00.654169 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.32s 2025-08-29 14:54:00.654174 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.29s 2025-08-29 14:54:00.654178 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s 2025-08-29 14:54:00.654503 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:00.657410 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:00.659568 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:00.659599 | orchestrator | 2025-08-29 14:54:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:03.718346 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:03.720007 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:03.722650 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:03.722699 | orchestrator | 2025-08-29 14:54:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:06.769229 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:06.771391 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:06.772891 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:06.773172 | orchestrator | 2025-08-29 14:54:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:09.812967 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:09.813404 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:09.814686 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:09.814777 | orchestrator | 2025-08-29 14:54:09 | INFO  | Wait 1 second(s) 
until the next check 2025-08-29 14:54:12.854953 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:12.856569 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:12.859506 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:12.859554 | orchestrator | 2025-08-29 14:54:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:15.896365 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:15.902527 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:15.904069 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:15.904130 | orchestrator | 2025-08-29 14:54:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:18.947855 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:18.948572 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:18.952345 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:18.952438 | orchestrator | 2025-08-29 14:54:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:21.991798 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:21.995821 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:22.003458 | orchestrator | 2025-08-29 14:54:22 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:22.003516 | orchestrator | 2025-08-29 14:54:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:25.038870 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:25.042261 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:25.043072 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:25.043099 | orchestrator | 2025-08-29 14:54:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:28.101508 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:28.101629 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:28.102559 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:28.102591 | orchestrator | 2025-08-29 14:54:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:31.146842 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:31.146965 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:31.149622 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 
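
The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records are the deploy wrapper polling the background tasks it kicked off until they leave the STARTED state, while the corresponding Ansible plays stream their own output in between. A minimal sketch of that wait-loop pattern, assuming a hypothetical get_task_state() lookup supplied by the caller rather than the actual OSISM/Celery client API:

import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   poll_interval: float = 1.0,
                   timeout: float = 3600.0) -> None:
    """Poll each task until it leaves the PENDING/STARTED states."""
    # get_task_state is a caller-supplied lookup (for example a thin wrapper
    # around a Celery AsyncResult); it is an assumption of this sketch, not
    # the client used in the job above.
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)  # finished (e.g. SUCCESS/FAILURE): stop polling it
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"Tasks still running: {sorted(pending)}")
        print(f"Wait {poll_interval:.0f} second(s) until the next check")
        time.sleep(poll_interval)

In the log the checks land roughly three seconds apart rather than one, presumably the one-second wait plus the time the three per-task state queries themselves take.
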
2025-08-29 14:54:31.149677 | orchestrator | 2025-08-29 14:54:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:34.189096 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:34.191473 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:34.192812 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:34.193888 | orchestrator | 2025-08-29 14:54:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:37.235722 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:37.238736 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:37.242390 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:37.242442 | orchestrator | 2025-08-29 14:54:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:40.280364 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:40.281076 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:40.284797 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:40.284884 | orchestrator | 2025-08-29 14:54:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:43.377501 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:43.378310 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:43.378973 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:43.378996 | orchestrator | 2025-08-29 14:54:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:46.423480 | orchestrator | 2025-08-29 14:54:46 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:46.423611 | orchestrator | 2025-08-29 14:54:46 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:46.423627 | orchestrator | 2025-08-29 14:54:46 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:46.423640 | orchestrator | 2025-08-29 14:54:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:49.466592 | orchestrator | 2025-08-29 14:54:49 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:49.467064 | orchestrator | 2025-08-29 14:54:49 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:49.467774 | orchestrator | 2025-08-29 14:54:49 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:49.467976 | orchestrator | 2025-08-29 14:54:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:52.537961 | orchestrator | 2025-08-29 14:54:52 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:52.540136 | orchestrator | 2025-08-29 14:54:52 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:52.542616 | orchestrator | 
2025-08-29 14:54:52 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:52.542683 | orchestrator | 2025-08-29 14:54:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:55.597404 | orchestrator | 2025-08-29 14:54:55 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:55.597592 | orchestrator | 2025-08-29 14:54:55 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:55.600483 | orchestrator | 2025-08-29 14:54:55 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:55.600581 | orchestrator | 2025-08-29 14:54:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:58.643087 | orchestrator | 2025-08-29 14:54:58 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:54:58.643887 | orchestrator | 2025-08-29 14:54:58 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:54:58.645251 | orchestrator | 2025-08-29 14:54:58 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:54:58.645284 | orchestrator | 2025-08-29 14:54:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:01.689373 | orchestrator | 2025-08-29 14:55:01 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:01.689537 | orchestrator | 2025-08-29 14:55:01 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:01.690287 | orchestrator | 2025-08-29 14:55:01 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:01.691433 | orchestrator | 2025-08-29 14:55:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:04.727587 | orchestrator | 2025-08-29 14:55:04 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:04.729669 | orchestrator | 2025-08-29 14:55:04 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:04.732213 | orchestrator | 2025-08-29 14:55:04 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:04.732261 | orchestrator | 2025-08-29 14:55:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:07.768669 | orchestrator | 2025-08-29 14:55:07 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:07.770685 | orchestrator | 2025-08-29 14:55:07 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:07.773441 | orchestrator | 2025-08-29 14:55:07 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:07.773870 | orchestrator | 2025-08-29 14:55:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:10.811790 | orchestrator | 2025-08-29 14:55:10 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:10.811870 | orchestrator | 2025-08-29 14:55:10 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:10.811879 | orchestrator | 2025-08-29 14:55:10 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:10.811886 | orchestrator | 2025-08-29 14:55:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:13.872941 | orchestrator | 2025-08-29 14:55:13 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:13.876582 | orchestrator | 2025-08-29 14:55:13 | INFO  | Task 
bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:13.879859 | orchestrator | 2025-08-29 14:55:13 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:13.880607 | orchestrator | 2025-08-29 14:55:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:16.923797 | orchestrator | 2025-08-29 14:55:16 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:16.924460 | orchestrator | 2025-08-29 14:55:16 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:16.926338 | orchestrator | 2025-08-29 14:55:16 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:16.926452 | orchestrator | 2025-08-29 14:55:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:19.973850 | orchestrator | 2025-08-29 14:55:19 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:19.973963 | orchestrator | 2025-08-29 14:55:19 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:19.973978 | orchestrator | 2025-08-29 14:55:19 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:19.973990 | orchestrator | 2025-08-29 14:55:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:23.026500 | orchestrator | 2025-08-29 14:55:23 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:23.029000 | orchestrator | 2025-08-29 14:55:23 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:23.031463 | orchestrator | 2025-08-29 14:55:23 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:23.031946 | orchestrator | 2025-08-29 14:55:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:26.073702 | orchestrator | 2025-08-29 14:55:26 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:26.075402 | orchestrator | 2025-08-29 14:55:26 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:26.077194 | orchestrator | 2025-08-29 14:55:26 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:26.077226 | orchestrator | 2025-08-29 14:55:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:29.132109 | orchestrator | 2025-08-29 14:55:29 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:29.134653 | orchestrator | 2025-08-29 14:55:29 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:29.137362 | orchestrator | 2025-08-29 14:55:29 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:29.137400 | orchestrator | 2025-08-29 14:55:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:32.195338 | orchestrator | 2025-08-29 14:55:32 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:32.195672 | orchestrator | 2025-08-29 14:55:32 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state STARTED 2025-08-29 14:55:32.196772 | orchestrator | 2025-08-29 14:55:32 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:32.198482 | orchestrator | 2025-08-29 14:55:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:35.239205 | orchestrator | 2025-08-29 14:55:35 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state 
STARTED 2025-08-29 14:55:35.245102 | orchestrator | 2025-08-29 14:55:35.245246 | orchestrator | 2025-08-29 14:55:35.245262 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:55:35.245275 | orchestrator | 2025-08-29 14:55:35.245323 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:55:35.245334 | orchestrator | Friday 29 August 2025 14:52:27 +0000 (0:00:00.211) 0:00:00.211 ********* 2025-08-29 14:55:35.245345 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:55:35.245356 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:55:35.245402 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:55:35.245412 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.245451 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.245461 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.245471 | orchestrator | 2025-08-29 14:55:35.245481 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:55:35.245491 | orchestrator | Friday 29 August 2025 14:52:27 +0000 (0:00:00.695) 0:00:00.906 ********* 2025-08-29 14:55:35.245501 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-08-29 14:55:35.245511 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-08-29 14:55:35.245521 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-08-29 14:55:35.245531 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-08-29 14:55:35.245540 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-08-29 14:55:35.245550 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-08-29 14:55:35.245560 | orchestrator | 2025-08-29 14:55:35.245570 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-08-29 14:55:35.245580 | orchestrator | 2025-08-29 14:55:35.245589 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-08-29 14:55:35.245599 | orchestrator | Friday 29 August 2025 14:52:28 +0000 (0:00:00.956) 0:00:01.863 ********* 2025-08-29 14:55:35.245610 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:55:35.245622 | orchestrator | 2025-08-29 14:55:35.245631 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-08-29 14:55:35.245641 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:01.581) 0:00:03.444 ********* 2025-08-29 14:55:35.245653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245795 | orchestrator | 2025-08-29 14:55:35.245805 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-08-29 14:55:35.245815 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:01.938) 0:00:05.382 ********* 2025-08-29 14:55:35.245825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245888 | orchestrator | 2025-08-29 14:55:35.245904 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-08-29 14:55:35.245919 | orchestrator | Friday 29 August 2025 14:52:34 +0000 (0:00:02.607) 0:00:07.990 ********* 2025-08-29 14:55:35.245943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.245997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246108 | orchestrator | 2025-08-29 14:55:35.246118 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-08-29 14:55:35.246153 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:01.467) 0:00:09.457 ********* 2025-08-29 14:55:35.246164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246241 | orchestrator | 2025-08-29 14:55:35.246251 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-08-29 14:55:35.246261 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:01.800) 0:00:11.258 ********* 2025-08-29 14:55:35.246271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246376 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.246392 | orchestrator | 2025-08-29 14:55:35.246407 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-08-29 14:55:35.246424 | orchestrator | Friday 29 August 2025 14:52:39 +0000 (0:00:01.838) 0:00:13.096 ********* 2025-08-29 14:55:35.246442 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:55:35.246460 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:55:35.246475 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:55:35.246488 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.246498 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.246507 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.246517 | orchestrator | 2025-08-29 14:55:35.246527 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-08-29 14:55:35.246536 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:02.646) 0:00:15.742 ********* 2025-08-29 14:55:35.246546 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-08-29 14:55:35.246556 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-08-29 14:55:35.246565 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-08-29 14:55:35.246581 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-08-29 14:55:35.246591 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-08-29 14:55:35.246601 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-08-29 14:55:35.246611 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:55:35.246620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:55:35.246630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:55:35.246639 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:55:35.246648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:55:35.246658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:55:35.246668 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:55:35.246679 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:55:35.246689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:55:35.246699 | orchestrator | 
changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:55:35.246709 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:55:35.246718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:55:35.246728 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:55:35.246746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:55:35.246755 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:55:35.246765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:55:35.246774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:55:35.246784 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:55:35.246793 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:55:35.246803 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:55:35.246812 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:55:35.246822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:55:35.246831 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:55:35.246840 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:55:35.246850 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:55:35.246863 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:55:35.246884 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:55:35.246900 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:55:35.246916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:55:35.246931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:55:35.246948 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 14:55:35.246966 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 14:55:35.246982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 14:55:35.246998 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 14:55:35.247022 | orchestrator | ok: 
[testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 14:55:35.247038 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 14:55:35.247049 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-08-29 14:55:35.247060 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-08-29 14:55:35.247069 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-08-29 14:55:35.247079 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-08-29 14:55:35.247088 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-08-29 14:55:35.247098 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-08-29 14:55:35.247115 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 14:55:35.247158 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 14:55:35.247170 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 14:55:35.247180 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 14:55:35.247190 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 14:55:35.247199 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 14:55:35.247208 | orchestrator | 2025-08-29 14:55:35.247218 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:55:35.247228 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:18.745) 0:00:34.487 ********* 2025-08-29 14:55:35.247238 | orchestrator | 2025-08-29 14:55:35.247248 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:55:35.247257 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.298) 0:00:34.786 ********* 2025-08-29 14:55:35.247267 | orchestrator | 2025-08-29 14:55:35.247276 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:55:35.247286 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.070) 0:00:34.856 ********* 2025-08-29 14:55:35.247296 | orchestrator | 2025-08-29 14:55:35.247305 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:55:35.247315 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.069) 0:00:34.925 ********* 2025-08-29 14:55:35.247324 | orchestrator | 2025-08-29 14:55:35.247334 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-08-29 14:55:35.247343 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.065) 0:00:34.991 ********* 2025-08-29 14:55:35.247353 | orchestrator | 2025-08-29 14:55:35.247362 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:55:35.247372 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.069) 0:00:35.060 ********* 2025-08-29 14:55:35.247381 | orchestrator | 2025-08-29 14:55:35.247391 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-08-29 14:55:35.247405 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.087) 0:00:35.148 ********* 2025-08-29 14:55:35.247422 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:55:35.247440 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:55:35.247459 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:55:35.247478 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.247496 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.247508 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.247518 | orchestrator | 2025-08-29 14:55:35.247528 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-08-29 14:55:35.247543 | orchestrator | Friday 29 August 2025 14:53:03 +0000 (0:00:01.993) 0:00:37.142 ********* 2025-08-29 14:55:35.247553 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.247563 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.247572 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.247582 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:55:35.247591 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:55:35.247600 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:55:35.247610 | orchestrator | 2025-08-29 14:55:35.247619 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-08-29 14:55:35.247629 | orchestrator | 2025-08-29 14:55:35.247639 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 14:55:35.247648 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:01:03.111) 0:01:40.253 ********* 2025-08-29 14:55:35.247665 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:55:35.247675 | orchestrator | 2025-08-29 14:55:35.247684 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 14:55:35.247694 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.818) 0:01:41.071 ********* 2025-08-29 14:55:35.247704 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:55:35.247713 | orchestrator | 2025-08-29 14:55:35.247730 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-08-29 14:55:35.247740 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.748) 0:01:41.820 ********* 2025-08-29 14:55:35.247750 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.247760 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.247770 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.247779 | orchestrator | 2025-08-29 14:55:35.247789 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-08-29 14:55:35.247798 | 
orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:01.026) 0:01:42.847 ********* 2025-08-29 14:55:35.247807 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.247817 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.247827 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.247837 | orchestrator | 2025-08-29 14:55:35.247846 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-08-29 14:55:35.247856 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.351) 0:01:43.198 ********* 2025-08-29 14:55:35.247870 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.247885 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.247900 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.247918 | orchestrator | 2025-08-29 14:55:35.247934 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-08-29 14:55:35.247951 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.366) 0:01:43.564 ********* 2025-08-29 14:55:35.247962 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.247971 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.247981 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.247990 | orchestrator | 2025-08-29 14:55:35.248000 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-08-29 14:55:35.248009 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.316) 0:01:43.881 ********* 2025-08-29 14:55:35.248019 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.248028 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.248038 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.248048 | orchestrator | 2025-08-29 14:55:35.248057 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-08-29 14:55:35.248067 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:00.632) 0:01:44.513 ********* 2025-08-29 14:55:35.248077 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248087 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248106 | orchestrator | 2025-08-29 14:55:35.248116 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-08-29 14:55:35.248243 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:00.316) 0:01:44.830 ********* 2025-08-29 14:55:35.248272 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248282 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248291 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248301 | orchestrator | 2025-08-29 14:55:35.248311 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-08-29 14:55:35.248321 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.386) 0:01:45.217 ********* 2025-08-29 14:55:35.248330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248340 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248349 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248359 | orchestrator | 2025-08-29 14:55:35.248387 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-08-29 14:55:35.248397 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.380) 0:01:45.597 ********* 2025-08-29 14:55:35.248407 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 14:55:35.248417 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248427 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248455 | orchestrator | 2025-08-29 14:55:35.248465 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-08-29 14:55:35.248475 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.525) 0:01:46.123 ********* 2025-08-29 14:55:35.248484 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248494 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248503 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248513 | orchestrator | 2025-08-29 14:55:35.248522 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-08-29 14:55:35.248532 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.320) 0:01:46.444 ********* 2025-08-29 14:55:35.248542 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248551 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248561 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248570 | orchestrator | 2025-08-29 14:55:35.248580 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-08-29 14:55:35.248590 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.296) 0:01:46.741 ********* 2025-08-29 14:55:35.248599 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248626 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248636 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248646 | orchestrator | 2025-08-29 14:55:35.248656 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-08-29 14:55:35.248665 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.332) 0:01:47.073 ********* 2025-08-29 14:55:35.248675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248685 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248695 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248704 | orchestrator | 2025-08-29 14:55:35.248714 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-08-29 14:55:35.248724 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.587) 0:01:47.661 ********* 2025-08-29 14:55:35.248733 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248742 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248759 | orchestrator | 2025-08-29 14:55:35.248767 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-08-29 14:55:35.248776 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.350) 0:01:48.011 ********* 2025-08-29 14:55:35.248785 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248794 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248802 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248811 | orchestrator | 2025-08-29 14:55:35.248831 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-08-29 14:55:35.248840 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.463) 0:01:48.475 ********* 2025-08-29 14:55:35.248849 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248858 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248867 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248875 | orchestrator | 2025-08-29 14:55:35.248884 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-08-29 14:55:35.248892 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.427) 0:01:48.903 ********* 2025-08-29 14:55:35.248901 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.248910 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.248919 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.248928 | orchestrator | 2025-08-29 14:55:35.248936 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 14:55:35.248951 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:01.028) 0:01:49.932 ********* 2025-08-29 14:55:35.248960 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:55:35.248969 | orchestrator | 2025-08-29 14:55:35.248977 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-08-29 14:55:35.248986 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:01.162) 0:01:51.095 ********* 2025-08-29 14:55:35.248994 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.249003 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.249012 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.249020 | orchestrator | 2025-08-29 14:55:35.249029 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-08-29 14:55:35.249038 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:00.659) 0:01:51.754 ********* 2025-08-29 14:55:35.249046 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.249055 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.249063 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.249072 | orchestrator | 2025-08-29 14:55:35.249081 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-08-29 14:55:35.249089 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:01.065) 0:01:52.820 ********* 2025-08-29 14:55:35.249098 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.249107 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.249115 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.249141 | orchestrator | 2025-08-29 14:55:35.249150 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-08-29 14:55:35.249158 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:00.504) 0:01:53.324 ********* 2025-08-29 14:55:35.249167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.249176 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.249184 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.249193 | orchestrator | 2025-08-29 14:55:35.249202 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-08-29 14:55:35.249211 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:00.800) 0:01:54.124 ********* 2025-08-29 14:55:35.249219 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.249228 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.249237 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.249246 | 
orchestrator | 2025-08-29 14:55:35.249254 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-08-29 14:55:35.249263 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:00.451) 0:01:54.575 ********* 2025-08-29 14:55:35.249272 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.249280 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.249289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.249298 | orchestrator | 2025-08-29 14:55:35.249306 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-08-29 14:55:35.249315 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:00.828) 0:01:55.406 ********* 2025-08-29 14:55:35.249323 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.249332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.249340 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.249349 | orchestrator | 2025-08-29 14:55:35.249358 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-08-29 14:55:35.249366 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:00.403) 0:01:55.810 ********* 2025-08-29 14:55:35.249375 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.249383 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.249392 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.249401 | orchestrator | 2025-08-29 14:55:35.249409 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 14:55:35.249418 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:00.317) 0:01:56.128 ********* 2025-08-29 14:55:35.249438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249489 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249534 | orchestrator | 2025-08-29 14:55:35.249543 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 14:55:35.249557 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:01.504) 0:01:57.632 ********* 2025-08-29 14:55:35.249570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249665 | orchestrator | 2025-08-29 14:55:35.249673 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 14:55:35.249682 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:04.480) 0:02:02.112 ********* 2025-08-29 14:55:35.249691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.249798 | orchestrator | 2025-08-29 14:55:35.249807 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:55:35.249816 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:02.254) 0:02:04.367 ********* 2025-08-29 14:55:35.249824 | orchestrator | 2025-08-29 14:55:35.249833 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:55:35.249842 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:00.076) 0:02:04.444 ********* 2025-08-29 14:55:35.249851 | orchestrator | 2025-08-29 14:55:35.249859 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:55:35.249868 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:00.071) 0:02:04.515 ********* 2025-08-29 14:55:35.249877 | orchestrator | 2025-08-29 14:55:35.249885 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 14:55:35.249894 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:00.069) 0:02:04.585 ********* 2025-08-29 14:55:35.249903 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.249911 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.249920 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.249929 | orchestrator | 2025-08-29 14:55:35.249938 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 14:55:35.249951 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:07.604) 0:02:12.189 ********* 2025-08-29 14:55:35.249960 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.249969 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.249978 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.249986 | orchestrator | 2025-08-29 14:55:35.249995 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 14:55:35.250004 | orchestrator | Friday 29 August 2025 14:54:45 +0000 (0:00:06.778) 0:02:18.968 ********* 2025-08-29 14:55:35.250012 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.250083 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.250092 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.250101 | orchestrator | 2025-08-29 14:55:35.250110 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 14:55:35.250119 | orchestrator | Friday 29 August 2025 14:54:53 +0000 (0:00:08.197) 0:02:27.166 ********* 2025-08-29 14:55:35.250145 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.250154 | orchestrator | 2025-08-29 14:55:35.250162 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 14:55:35.250171 | orchestrator | Friday 29 August 2025 14:54:54 +0000 (0:00:00.137) 0:02:27.304 ********* 2025-08-29 14:55:35.250180 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.250189 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.250198 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.250206 | orchestrator | 2025-08-29 14:55:35.250222 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 
14:55:35.250231 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:00.896) 0:02:28.201 ********* 2025-08-29 14:55:35.250240 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.250248 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.250257 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.250265 | orchestrator | 2025-08-29 14:55:35.250274 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 14:55:35.250283 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:00.677) 0:02:28.879 ********* 2025-08-29 14:55:35.250292 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.250301 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.250309 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.250318 | orchestrator | 2025-08-29 14:55:35.250327 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 14:55:35.250335 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:01.187) 0:02:30.066 ********* 2025-08-29 14:55:35.250351 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.250360 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.250369 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.250378 | orchestrator | 2025-08-29 14:55:35.250387 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 14:55:35.250395 | orchestrator | Friday 29 August 2025 14:54:57 +0000 (0:00:00.622) 0:02:30.689 ********* 2025-08-29 14:55:35.250404 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.250413 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.250422 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.250430 | orchestrator | 2025-08-29 14:55:35.250439 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 14:55:35.250448 | orchestrator | Friday 29 August 2025 14:54:58 +0000 (0:00:00.785) 0:02:31.474 ********* 2025-08-29 14:55:35.250457 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.250465 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.250474 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.250483 | orchestrator | 2025-08-29 14:55:35.250491 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-08-29 14:55:35.250500 | orchestrator | Friday 29 August 2025 14:54:59 +0000 (0:00:00.810) 0:02:32.284 ********* 2025-08-29 14:55:35.250509 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.250518 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.250526 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.250535 | orchestrator | 2025-08-29 14:55:35.250543 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 14:55:35.250552 | orchestrator | Friday 29 August 2025 14:54:59 +0000 (0:00:00.539) 0:02:32.824 ********* 2025-08-29 14:55:35.250561 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250570 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250580 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250603 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250613 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250642 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250651 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250660 | orchestrator | 2025-08-29 14:55:35.250669 | orchestrator | TASK [ovn-db : Copying over config.json 
files for services] ******************** 2025-08-29 14:55:35.250678 | orchestrator | Friday 29 August 2025 14:55:01 +0000 (0:00:01.530) 0:02:34.354 ********* 2025-08-29 14:55:35.250687 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250696 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250705 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250714 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250801 | orchestrator | 2025-08-29 14:55:35.250810 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 14:55:35.250818 | orchestrator | Friday 29 August 2025 14:55:05 +0000 (0:00:03.846) 0:02:38.201 ********* 2025-08-29 14:55:35.250827 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250845 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250854 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250892 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:55:35.250928 | orchestrator | 2025-08-29 14:55:35.250937 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:55:35.250945 | orchestrator | Friday 29 August 2025 14:55:07 +0000 (0:00:02.967) 0:02:41.168 ********* 2025-08-29 14:55:35.250954 | orchestrator | 2025-08-29 14:55:35.250963 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:55:35.250972 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:00.143) 0:02:41.312 ********* 2025-08-29 14:55:35.250981 | orchestrator | 2025-08-29 14:55:35.250989 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:55:35.250998 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:00.320) 0:02:41.633 ********* 2025-08-29 14:55:35.251007 | orchestrator | 2025-08-29 14:55:35.251015 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 14:55:35.251024 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:00.078) 0:02:41.711 ********* 2025-08-29 14:55:35.251033 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.251041 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.251050 | orchestrator | 2025-08-29 14:55:35.251059 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 14:55:35.251067 | orchestrator | Friday 29 August 2025 14:55:14 +0000 (0:00:06.334) 0:02:48.046 ********* 2025-08-29 14:55:35.251076 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.251085 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.251093 | orchestrator | 2025-08-29 14:55:35.251102 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 14:55:35.251111 | orchestrator | Friday 29 August 2025 14:55:21 +0000 (0:00:06.236) 0:02:54.283 ********* 2025-08-29 14:55:35.251120 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:55:35.251156 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:55:35.251165 | orchestrator | 2025-08-29 14:55:35.251174 | orchestrator | TASK [ovn-db : Wait for leader election] 
*************************************** 2025-08-29 14:55:35.251183 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:06.191) 0:03:00.475 ********* 2025-08-29 14:55:35.251192 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:35.251201 | orchestrator | 2025-08-29 14:55:35.251209 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 14:55:35.251218 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:00.149) 0:03:00.625 ********* 2025-08-29 14:55:35.251233 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.251242 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.251251 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.251260 | orchestrator | 2025-08-29 14:55:35.251268 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 14:55:35.251277 | orchestrator | Friday 29 August 2025 14:55:28 +0000 (0:00:00.836) 0:03:01.461 ********* 2025-08-29 14:55:35.251286 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.251295 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.251303 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.251312 | orchestrator | 2025-08-29 14:55:35.251320 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 14:55:35.251329 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:00.757) 0:03:02.219 ********* 2025-08-29 14:55:35.251338 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.251347 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.251356 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.251364 | orchestrator | 2025-08-29 14:55:35.251373 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 14:55:35.251382 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:00.936) 0:03:03.156 ********* 2025-08-29 14:55:35.251390 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:35.251399 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:35.251407 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:55:35.251416 | orchestrator | 2025-08-29 14:55:35.251429 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 14:55:35.251438 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:00.888) 0:03:04.044 ********* 2025-08-29 14:55:35.251447 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.251456 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.251464 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.251473 | orchestrator | 2025-08-29 14:55:35.251482 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 14:55:35.251491 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.984) 0:03:05.029 ********* 2025-08-29 14:55:35.251500 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:35.251508 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:35.251517 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:35.251526 | orchestrator | 2025-08-29 14:55:35.251534 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:55:35.251544 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 14:55:35.251553 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 
skipped=22  rescued=0 ignored=0 2025-08-29 14:55:35.251568 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 14:55:35.251577 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:55:35.251586 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:55:35.251595 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:55:35.251603 | orchestrator | 2025-08-29 14:55:35.251612 | orchestrator | 2025-08-29 14:55:35.251621 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:55:35.251630 | orchestrator | Friday 29 August 2025 14:55:33 +0000 (0:00:01.669) 0:03:06.698 ********* 2025-08-29 14:55:35.251638 | orchestrator | =============================================================================== 2025-08-29 14:55:35.251654 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 63.11s 2025-08-29 14:55:35.251663 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.75s 2025-08-29 14:55:35.251672 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.39s 2025-08-29 14:55:35.251680 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.94s 2025-08-29 14:55:35.251689 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.02s 2025-08-29 14:55:35.251697 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.48s 2025-08-29 14:55:35.251706 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.85s 2025-08-29 14:55:35.251715 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.97s 2025-08-29 14:55:35.251723 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.65s 2025-08-29 14:55:35.251732 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.61s 2025-08-29 14:55:35.251741 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.25s 2025-08-29 14:55:35.251749 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.99s 2025-08-29 14:55:35.251758 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.94s 2025-08-29 14:55:35.251767 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.84s 2025-08-29 14:55:35.251776 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.80s 2025-08-29 14:55:35.251784 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.67s 2025-08-29 14:55:35.251793 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.58s 2025-08-29 14:55:35.251801 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2025-08-29 14:55:35.251810 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s 2025-08-29 14:55:35.251819 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.47s 2025-08-29 14:55:35.251828 | orchestrator | 2025-08-29 
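The play above bootstraps the OVN Northbound and Southbound databases as three-node Raft clusters and only applies the connection settings on the host it identifies as the current leader ("Get OVN_Northbound cluster leader" / "Configure OVN NB connection settings"). As a rough sketch of what such a leader check amounts to, the Raft status could be queried by hand roughly as follows; the container names match the log above, while the control-socket paths and the use of docker are assumptions that may differ in this environment:

    # Hypothetical manual check of the OVN NB Raft cluster role/leader on a node
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl \
        cluster/status OVN_Northbound | grep -E 'Role|Leader|Status'

    # Same idea for the Southbound database (ovn_sb_db container)
    docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl \
        cluster/status OVN_Southbound | grep -E 'Role|Leader|Status'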
14:55:35 | INFO  | Task bf90a257-47be-47ec-a1be-ae3787058cff is in state SUCCESS 2025-08-29 14:55:35.251837 | orchestrator | 2025-08-29 14:55:35 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:35.251846 | orchestrator | 2025-08-29 14:55:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:38.282516 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:38.283102 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:38.283182 | orchestrator | 2025-08-29 14:55:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:41.341667 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:41.343531 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:41.343570 | orchestrator | 2025-08-29 14:55:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:44.385429 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:44.385876 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:44.386259 | orchestrator | 2025-08-29 14:55:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:47.433017 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:47.435932 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:47.436027 | orchestrator | 2025-08-29 14:55:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:50.496240 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:50.496371 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:50.496388 | orchestrator | 2025-08-29 14:55:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:53.523101 | orchestrator | 2025-08-29 14:55:53 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:53.523487 | orchestrator | 2025-08-29 14:55:53 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:53.523514 | orchestrator | 2025-08-29 14:55:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:56.569197 | orchestrator | 2025-08-29 14:55:56 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:56.572621 | orchestrator | 2025-08-29 14:55:56 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:56.572715 | orchestrator | 2025-08-29 14:55:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:59.627291 | orchestrator | 2025-08-29 14:55:59 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:55:59.630612 | orchestrator | 2025-08-29 14:55:59 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:55:59.630666 | orchestrator | 2025-08-29 14:55:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:02.673485 | orchestrator | 2025-08-29 14:56:02 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 
14:56:02.678922 | orchestrator | 2025-08-29 14:56:02 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:02.679009 | orchestrator | 2025-08-29 14:56:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:05.735591 | orchestrator | 2025-08-29 14:56:05 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:05.735693 | orchestrator | 2025-08-29 14:56:05 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:05.735708 | orchestrator | 2025-08-29 14:56:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:08.790639 | orchestrator | 2025-08-29 14:56:08 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:08.790900 | orchestrator | 2025-08-29 14:56:08 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:08.790992 | orchestrator | 2025-08-29 14:56:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:11.852777 | orchestrator | 2025-08-29 14:56:11 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:11.854950 | orchestrator | 2025-08-29 14:56:11 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:11.855066 | orchestrator | 2025-08-29 14:56:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:14.904449 | orchestrator | 2025-08-29 14:56:14 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:14.907398 | orchestrator | 2025-08-29 14:56:14 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:14.907935 | orchestrator | 2025-08-29 14:56:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:17.957311 | orchestrator | 2025-08-29 14:56:17 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:17.962450 | orchestrator | 2025-08-29 14:56:17 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:17.963256 | orchestrator | 2025-08-29 14:56:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:21.021135 | orchestrator | 2025-08-29 14:56:21 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:21.022733 | orchestrator | 2025-08-29 14:56:21 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:21.023004 | orchestrator | 2025-08-29 14:56:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:24.070930 | orchestrator | 2025-08-29 14:56:24 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:24.071537 | orchestrator | 2025-08-29 14:56:24 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:24.071702 | orchestrator | 2025-08-29 14:56:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:27.124583 | orchestrator | 2025-08-29 14:56:27 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:27.124809 | orchestrator | 2025-08-29 14:56:27 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:27.124832 | orchestrator | 2025-08-29 14:56:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:30.165820 | orchestrator | 2025-08-29 14:56:30 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:30.166908 | orchestrator | 2025-08-29 14:56:30 | INFO  | Task 
8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:30.167536 | orchestrator | 2025-08-29 14:56:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:33.219547 | orchestrator | 2025-08-29 14:56:33 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:33.220747 | orchestrator | 2025-08-29 14:56:33 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:33.220962 | orchestrator | 2025-08-29 14:56:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:36.263462 | orchestrator | 2025-08-29 14:56:36 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:36.264180 | orchestrator | 2025-08-29 14:56:36 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:36.264506 | orchestrator | 2025-08-29 14:56:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:39.305094 | orchestrator | 2025-08-29 14:56:39 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:39.305498 | orchestrator | 2025-08-29 14:56:39 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:39.305525 | orchestrator | 2025-08-29 14:56:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:42.349925 | orchestrator | 2025-08-29 14:56:42 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:42.351569 | orchestrator | 2025-08-29 14:56:42 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:42.352229 | orchestrator | 2025-08-29 14:56:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:45.397781 | orchestrator | 2025-08-29 14:56:45 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:45.399943 | orchestrator | 2025-08-29 14:56:45 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:45.400025 | orchestrator | 2025-08-29 14:56:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:48.454574 | orchestrator | 2025-08-29 14:56:48 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:48.456654 | orchestrator | 2025-08-29 14:56:48 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:48.456741 | orchestrator | 2025-08-29 14:56:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:51.499602 | orchestrator | 2025-08-29 14:56:51 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:51.501747 | orchestrator | 2025-08-29 14:56:51 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:51.501821 | orchestrator | 2025-08-29 14:56:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:54.542781 | orchestrator | 2025-08-29 14:56:54 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:54.542901 | orchestrator | 2025-08-29 14:56:54 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:54.542917 | orchestrator | 2025-08-29 14:56:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:57.588603 | orchestrator | 2025-08-29 14:56:57 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:56:57.592462 | orchestrator | 2025-08-29 14:56:57 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:56:57.595126 | orchestrator 
| 2025-08-29 14:56:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:00.633129 | orchestrator | 2025-08-29 14:57:00 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:00.634486 | orchestrator | 2025-08-29 14:57:00 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:00.634518 | orchestrator | 2025-08-29 14:57:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:03.680445 | orchestrator | 2025-08-29 14:57:03 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:03.681925 | orchestrator | 2025-08-29 14:57:03 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:03.681977 | orchestrator | 2025-08-29 14:57:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:06.720401 | orchestrator | 2025-08-29 14:57:06 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:06.721913 | orchestrator | 2025-08-29 14:57:06 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:06.721951 | orchestrator | 2025-08-29 14:57:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:09.761471 | orchestrator | 2025-08-29 14:57:09 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:09.763796 | orchestrator | 2025-08-29 14:57:09 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:09.763969 | orchestrator | 2025-08-29 14:57:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:12.811445 | orchestrator | 2025-08-29 14:57:12 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:12.811854 | orchestrator | 2025-08-29 14:57:12 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:12.813575 | orchestrator | 2025-08-29 14:57:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:15.851889 | orchestrator | 2025-08-29 14:57:15 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:15.853657 | orchestrator | 2025-08-29 14:57:15 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:15.854934 | orchestrator | 2025-08-29 14:57:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:18.897857 | orchestrator | 2025-08-29 14:57:18 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:18.902485 | orchestrator | 2025-08-29 14:57:18 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:18.902552 | orchestrator | 2025-08-29 14:57:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:21.937356 | orchestrator | 2025-08-29 14:57:21 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:21.940234 | orchestrator | 2025-08-29 14:57:21 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:21.940279 | orchestrator | 2025-08-29 14:57:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:24.973697 | orchestrator | 2025-08-29 14:57:24 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:24.974729 | orchestrator | 2025-08-29 14:57:24 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:24.975259 | orchestrator | 2025-08-29 14:57:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:28.031330 | 
orchestrator | 2025-08-29 14:57:28 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:28.032961 | orchestrator | 2025-08-29 14:57:28 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:28.033011 | orchestrator | 2025-08-29 14:57:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:31.072241 | orchestrator | 2025-08-29 14:57:31 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:31.072291 | orchestrator | 2025-08-29 14:57:31 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:31.072297 | orchestrator | 2025-08-29 14:57:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:34.098339 | orchestrator | 2025-08-29 14:57:34 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:34.101172 | orchestrator | 2025-08-29 14:57:34 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:34.101230 | orchestrator | 2025-08-29 14:57:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:37.146931 | orchestrator | 2025-08-29 14:57:37 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:37.149236 | orchestrator | 2025-08-29 14:57:37 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:37.149314 | orchestrator | 2025-08-29 14:57:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:40.202487 | orchestrator | 2025-08-29 14:57:40 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:40.204680 | orchestrator | 2025-08-29 14:57:40 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:40.204830 | orchestrator | 2025-08-29 14:57:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:43.250424 | orchestrator | 2025-08-29 14:57:43 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:43.254890 | orchestrator | 2025-08-29 14:57:43 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:43.254978 | orchestrator | 2025-08-29 14:57:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:46.298696 | orchestrator | 2025-08-29 14:57:46 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:46.302137 | orchestrator | 2025-08-29 14:57:46 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:46.302185 | orchestrator | 2025-08-29 14:57:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:49.345840 | orchestrator | 2025-08-29 14:57:49 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:49.347638 | orchestrator | 2025-08-29 14:57:49 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:49.347684 | orchestrator | 2025-08-29 14:57:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:52.379943 | orchestrator | 2025-08-29 14:57:52 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:52.384074 | orchestrator | 2025-08-29 14:57:52 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:52.384125 | orchestrator | 2025-08-29 14:57:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:55.427901 | orchestrator | 2025-08-29 14:57:55 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state 
STARTED 2025-08-29 14:57:55.430371 | orchestrator | 2025-08-29 14:57:55 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:55.430420 | orchestrator | 2025-08-29 14:57:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:57:58.470290 | orchestrator | 2025-08-29 14:57:58 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state STARTED 2025-08-29 14:57:58.472654 | orchestrator | 2025-08-29 14:57:58 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:57:58.472710 | orchestrator | 2025-08-29 14:57:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:01.519358 | orchestrator | 2025-08-29 14:58:01 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:01.523198 | orchestrator | 2025-08-29 14:58:01 | INFO  | Task c0a31e81-defa-4d2a-a60b-a97581d6af6e is in state SUCCESS 2025-08-29 14:58:01.525130 | orchestrator | 2025-08-29 14:58:01.525172 | orchestrator | 2025-08-29 14:58:01.525186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:58:01.525198 | orchestrator | 2025-08-29 14:58:01.525210 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:58:01.525221 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:00.788) 0:00:00.788 ********* 2025-08-29 14:58:01.525233 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.525291 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.525313 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.525335 | orchestrator | 2025-08-29 14:58:01.525355 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:58:01.525370 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:00.767) 0:00:01.556 ********* 2025-08-29 14:58:01.525382 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-08-29 14:58:01.525393 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-08-29 14:58:01.525404 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-08-29 14:58:01.525415 | orchestrator | 2025-08-29 14:58:01.525431 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-08-29 14:58:01.525442 | orchestrator | 2025-08-29 14:58:01.525453 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 14:58:01.525464 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:01.061) 0:00:02.617 ********* 2025-08-29 14:58:01.525476 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.525581 | orchestrator | 2025-08-29 14:58:01.525683 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-08-29 14:58:01.525718 | orchestrator | Friday 29 August 2025 14:51:05 +0000 (0:00:01.310) 0:00:03.928 ********* 2025-08-29 14:58:01.525730 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.525741 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.525751 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.525762 | orchestrator | 2025-08-29 14:58:01.525773 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-08-29 14:58:01.525784 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:01.116) 0:00:05.045 
********* 2025-08-29 14:58:01.525795 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.525806 | orchestrator | 2025-08-29 14:58:01.525816 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-08-29 14:58:01.525828 | orchestrator | Friday 29 August 2025 14:51:07 +0000 (0:00:01.317) 0:00:06.362 ********* 2025-08-29 14:58:01.525839 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.525849 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.525860 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.525871 | orchestrator | 2025-08-29 14:58:01.525882 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-08-29 14:58:01.525893 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:01.074) 0:00:07.436 ********* 2025-08-29 14:58:01.525904 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-08-29 14:58:01.525915 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-08-29 14:58:01.525926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-08-29 14:58:01.525936 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-08-29 14:58:01.525947 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-08-29 14:58:01.525958 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-08-29 14:58:01.525969 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-08-29 14:58:01.525980 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-08-29 14:58:01.525991 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-08-29 14:58:01.526135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-08-29 14:58:01.526148 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-08-29 14:58:01.526158 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-08-29 14:58:01.526169 | orchestrator | 2025-08-29 14:58:01.526180 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 14:58:01.526191 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:03.717) 0:00:11.154 ********* 2025-08-29 14:58:01.526201 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-08-29 14:58:01.526213 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-08-29 14:58:01.526224 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-08-29 14:58:01.526235 | orchestrator | 2025-08-29 14:58:01.526246 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 14:58:01.526257 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:01.560) 0:00:12.714 ********* 2025-08-29 14:58:01.526267 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-08-29 14:58:01.526278 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-08-29 14:58:01.526289 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 
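Note: the two tasks above (sysctl tuning and loading/persisting ip_vs) are standard host preparation before the haproxy/keepalived containers start. Below is a minimal standalone sketch of the same effect; it is not the kolla-ansible loadbalancer role itself, the host group name and the collection modules (ansible.posix.sysctl, community.general.modprobe) are assumptions for illustration, and the sysctl names and values are taken from the log output above. The net.ipv4.tcp_retries2 / KOLLA_UNSET items reported as "ok" are handled specially by the role and are omitted here.

# Sketch only: equivalent host preparation, not the kolla-ansible role.
- hosts: loadbalancer   # assumed group name for illustration
  become: true
  tasks:
    - name: Allow binding to non-local addresses (so the standby node can bind the VIP)
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
      loop:
        - { name: net.ipv4.ip_nonlocal_bind, value: "1" }
        - { name: net.ipv6.ip_nonlocal_bind, value: "1" }
        - { name: net.unix.max_dgram_qlen, value: "128" }

    - name: Load the ip_vs kernel module (matches the module-load task above)
      community.general.modprobe:
        name: ip_vs
        state: present

    - name: Persist the module across reboots via modules-load.d
      ansible.builtin.copy:
        content: "ip_vs\n"
        dest: /etc/modules-load.d/ip_vs.conf
        mode: "0644"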
2025-08-29 14:58:01.526337 | orchestrator | 2025-08-29 14:58:01.526405 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 14:58:01.526419 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:02.304) 0:00:15.018 ********* 2025-08-29 14:58:01.526468 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-08-29 14:58:01.526481 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.526573 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-08-29 14:58:01.526588 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.526599 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-08-29 14:58:01.526610 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.526621 | orchestrator | 2025-08-29 14:58:01.526631 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-08-29 14:58:01.526642 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:02.190) 0:00:17.208 ********* 2025-08-29 14:58:01.526662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.526681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.526694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.526705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.526718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.526747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.526901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.526919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.526931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.526943 | orchestrator | 2025-08-29 14:58:01.526955 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-08-29 14:58:01.526966 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:04.008) 
0:00:21.217 ********* 2025-08-29 14:58:01.526977 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.526988 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.527025 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.527037 | orchestrator | 2025-08-29 14:58:01.527048 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-08-29 14:58:01.527059 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:01.715) 0:00:22.932 ********* 2025-08-29 14:58:01.527070 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-08-29 14:58:01.527081 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-08-29 14:58:01.527092 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-08-29 14:58:01.527103 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-08-29 14:58:01.527113 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-08-29 14:58:01.527124 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-08-29 14:58:01.527270 | orchestrator | 2025-08-29 14:58:01.527290 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-08-29 14:58:01.527310 | orchestrator | Friday 29 August 2025 14:51:26 +0000 (0:00:02.717) 0:00:25.649 ********* 2025-08-29 14:58:01.527381 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.527392 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.527402 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.527413 | orchestrator | 2025-08-29 14:58:01.527424 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-08-29 14:58:01.527444 | orchestrator | Friday 29 August 2025 14:51:28 +0000 (0:00:01.653) 0:00:27.303 ********* 2025-08-29 14:58:01.527455 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.527481 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.527493 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.527504 | orchestrator | 2025-08-29 14:58:01.527515 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-08-29 14:58:01.527534 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:01.304) 0:00:28.607 ********* 2025-08-29 14:58:01.527555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.527743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.527767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.527780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:58:01.527791 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.527803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.527815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.527835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.527854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:58:01.527866 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.527878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.527894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.527906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.527917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:58:01.527936 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.527947 | orchestrator | 2025-08-29 14:58:01.527958 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2025-08-29 14:58:01.528128 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.974) 0:00:29.582 ********* 2025-08-29 14:58:01.528145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2025-08-29 14:58:01.528225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.528237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.528249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:58:01.528272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:58:01.528307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.528345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2', '__omit_place_holder__d1c1ac66ccf6b60922ab8a918eaff1f7e26db2f2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:58:01.528378 | orchestrator | 2025-08-29 14:58:01.528517 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-08-29 14:58:01.528678 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:04.643) 0:00:34.225 ********* 2025-08-29 14:58:01.528692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.528820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.528832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.528844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 
14:58:01.528855 | orchestrator | 2025-08-29 14:58:01.528866 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-08-29 14:58:01.528902 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:03.885) 0:00:38.110 ********* 2025-08-29 14:58:01.528914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 14:58:01.528933 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 14:58:01.529184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 14:58:01.529204 | orchestrator | 2025-08-29 14:58:01.529216 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-08-29 14:58:01.529227 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:02.476) 0:00:40.587 ********* 2025-08-29 14:58:01.529238 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 14:58:01.529301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 14:58:01.529313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 14:58:01.529324 | orchestrator | 2025-08-29 14:58:01.529335 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-08-29 14:58:01.529352 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:04.039) 0:00:44.627 ********* 2025-08-29 14:58:01.529363 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.529383 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.529394 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.529405 | orchestrator | 2025-08-29 14:58:01.529416 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-08-29 14:58:01.529427 | orchestrator | Friday 29 August 2025 14:51:46 +0000 (0:00:00.681) 0:00:45.308 ********* 2025-08-29 14:58:01.529438 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 14:58:01.529450 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 14:58:01.529461 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 14:58:01.529472 | orchestrator | 2025-08-29 14:58:01.529482 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-08-29 14:58:01.529554 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:02.728) 0:00:48.037 ********* 2025-08-29 14:58:01.529565 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 14:58:01.529576 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 14:58:01.529587 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 14:58:01.529596 | orchestrator | 2025-08-29 14:58:01.529606 | orchestrator | TASK [loadbalancer : Copying 
over haproxy.pem] ********************************* 2025-08-29 14:58:01.529615 | orchestrator | Friday 29 August 2025 14:51:53 +0000 (0:00:03.758) 0:00:51.796 ********* 2025-08-29 14:58:01.529625 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-08-29 14:58:01.529635 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-08-29 14:58:01.529645 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-08-29 14:58:01.529657 | orchestrator | 2025-08-29 14:58:01.529673 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-08-29 14:58:01.529688 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:01.849) 0:00:53.646 ********* 2025-08-29 14:58:01.529704 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-08-29 14:58:01.529721 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-08-29 14:58:01.529755 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-08-29 14:58:01.529766 | orchestrator | 2025-08-29 14:58:01.529776 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 14:58:01.529786 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:01.860) 0:00:55.506 ********* 2025-08-29 14:58:01.529795 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.529805 | orchestrator | 2025-08-29 14:58:01.529816 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-08-29 14:58:01.529832 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:01.265) 0:00:56.772 ********* 2025-08-29 14:58:01.529849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.529880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.529916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.529935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.529953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.529972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.529989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.530061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.530090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.530100 | orchestrator | 2025-08-29 14:58:01.530111 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 14:58:01.530120 | orchestrator | Friday 29 August 2025 14:52:01 +0000 (0:00:03.643) 0:01:00.415 ********* 2025-08-29 14:58:01.530136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.530177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530221 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.530236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530267 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.530277 | orchestrator | 2025-08-29 14:58:01.530287 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 14:58:01.530297 | orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:00.861) 0:01:01.277 ********* 2025-08-29 14:58:01.530307 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530350 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.530360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530395 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.530405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530442 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.530452 | orchestrator | 2025-08-29 14:58:01.530462 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 14:58:01.530472 | orchestrator | Friday 29 August 2025 14:52:04 +0000 (0:00:01.864) 0:01:03.141 ********* 2025-08-29 14:58:01.530488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530527 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.530537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530574 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.530590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530625 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.530635 | orchestrator | 2025-08-29 14:58:01.530645 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 14:58:01.530655 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:00.749) 0:01:03.891 ********* 2025-08-29 14:58:01.530665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530712 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.530731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530792 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.530802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530828 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.530838 | orchestrator | 2025-08-29 14:58:01.530848 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 14:58:01.530858 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:00.737) 0:01:04.628 ********* 2025-08-29 14:58:01.530868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530920 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.530930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.530955 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.530965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.530980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.530991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531022 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.531033 | 
orchestrator | 2025-08-29 14:58:01.531043 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-08-29 14:58:01.531053 | orchestrator | Friday 29 August 2025 14:52:07 +0000 (0:00:01.511) 0:01:06.140 ********* 2025-08-29 14:58:01.531063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531100 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.531110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531172 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.531186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531223 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.531233 | orchestrator | 2025-08-29 14:58:01.531243 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-08-29 14:58:01.531253 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:00.698) 0:01:06.838 ********* 2025-08-29 14:58:01.531263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531299 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.531313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531360 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.531371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531391 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.531401 | orchestrator | 2025-08-29 14:58:01.531411 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 14:58:01.531425 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:00.690) 0:01:07.529 ********* 2025-08-29 14:58:01.531436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
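The repeated `(item={'key': ..., 'value': {...}})` entries in these service-cert-copy and loadbalancer tasks are the per-service definitions (haproxy, proxysql, keepalived) that the roles iterate over with a dict loop: each value carries the container name, image, volumes, dimensions, and an optional healthcheck. The following is a minimal Python sketch that restates the haproxy item exactly as it appears in the loop output above, purely to make that structure easier to scan; the `summarize` helper is a hypothetical illustration and not part of kolla-ansible.

# Illustration only: the dict mirrors the 'haproxy' loop item printed verbatim in the
# task output above; summarize() is a hypothetical helper, not kolla-ansible code.
loadbalancer_services = {
    "haproxy": {
        "container_name": "haproxy",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/haproxy:2.6.12.20250711",
        "privileged": True,
        "volumes": [
            "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "haproxy_socket:/var/lib/kolla/haproxy/",
            "letsencrypt_certificates:/etc/haproxy/certificates",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
            "timeout": "30",
        },
    },
}

def summarize(services):
    """Print one line per service, walking the items the way the with_dict loop does."""
    for key, value in services.items():
        # The last element of the healthcheck 'test' list is the actual check command.
        check = value.get("healthcheck", {}).get("test", ["-", "no healthcheck"])[-1]
        print(f"{key}: image={value['image']} healthcheck={check!r}")

if __name__ == "__main__":
    summarize(loadbalancer_services)

The same shape recurs for every service in the tasks that follow; only the image tag, volumes, and healthcheck command differ per service and per node address.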
2025-08-29 14:58:01.531469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531479 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.531489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531520 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.531540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:58:01.531563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:58:01.531592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:58:01.531611 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.531622 | orchestrator | 2025-08-29 14:58:01.531632 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 14:58:01.531641 | orchestrator | Friday 29 August 2025 14:52:10 +0000 (0:00:01.605) 0:01:09.134 ********* 2025-08-29 14:58:01.531651 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:58:01.531663 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:58:01.531673 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:58:01.531683 | orchestrator | 2025-08-29 14:58:01.531693 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 14:58:01.531702 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:01.789) 0:01:10.924 ********* 2025-08-29 14:58:01.531712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:58:01.531722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:58:01.531732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:58:01.531742 | orchestrator | 2025-08-29 14:58:01.531752 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 14:58:01.531761 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:01.990) 0:01:12.914 ********* 2025-08-29 14:58:01.531771 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:58:01.531782 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:58:01.531791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:58:01.531801 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:58:01.531811 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.531820 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:58:01.531830 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.531840 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:58:01.531850 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.531860 | orchestrator | 2025-08-29 14:58:01.531869 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 14:58:01.531879 | orchestrator | Friday 29 August 2025 14:52:16 +0000 (0:00:02.478) 0:01:15.393 ********* 2025-08-29 14:58:01.531901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.531916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.531927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:58:01.531937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.531948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.531958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:58:01.531973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.531990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.532020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:58:01.532031 | orchestrator | 2025-08-29 14:58:01.532041 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 14:58:01.532051 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:02.746) 0:01:18.139 ********* 2025-08-29 14:58:01.532061 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.532071 | orchestrator | 2025-08-29 14:58:01.532081 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 14:58:01.532090 | orchestrator | Friday 29 August 2025 14:52:20 +0000 (0:00:00.650) 0:01:18.789 ********* 2025-08-29 
14:58:01.532102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:58:01.532113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.532124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:58:01.532172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.532183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:58:01.532219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.532235 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532268 | orchestrator | 2025-08-29 14:58:01.532278 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 14:58:01.532288 | orchestrator | Friday 29 August 2025 14:52:24 +0000 (0:00:04.272) 0:01:23.062 ********* 2025-08-29 14:58:01.532298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:58:01.532308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.532324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532344 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.532361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:58:01.532375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.532386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:58:01.532412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532422 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.532433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.532448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.532473 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.532483 | orchestrator | 2025-08-29 14:58:01.532493 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-08-29 14:58:01.532503 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:00.726) 0:01:23.789 ********* 2025-08-29 14:58:01.532513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:58:01.532524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
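The aodh items above show the per-service 'haproxy' mapping that the haproxy-config role consumes: one internal listener and one external listener, both on port 8042, with the external one tied to api.testbed.osism.xyz. As a rough illustration only (not kolla-ansible's actual Jinja2 template; the VIP addresses, backend list, and stanza layout below are assumptions), such a mapping could be rendered into HAProxy listen stanzas along these lines:

# Illustrative sketch: turn a kolla-style 'haproxy' service mapping into
# HAProxy listen stanzas. The real haproxy-config role uses Jinja2 templates;
# the VIPs and backend addresses here are assumptions for the example.
def render_listeners(service_name, haproxy_map,
                     internal_vip="192.168.16.9", external_vip="192.168.16.254",
                     backends=("192.168.16.10", "192.168.16.11", "192.168.16.12")):
    stanzas = []
    for name, cfg in haproxy_map.items():
        if cfg.get("enabled") != "yes":
            continue
        bind_ip = external_vip if cfg.get("external") else internal_vip
        lines = [
            f"listen {name}",
            f"    mode {cfg.get('mode', 'http')}",
            f"    bind {bind_ip}:{cfg['listen_port']}",
        ]
        for i, addr in enumerate(backends):
            lines.append(
                f"    server {service_name}-{i} {addr}:{cfg['port']} check inter 2000 rise 2 fall 5"
            )
        stanzas.append("\n".join(lines))
    return "\n\n".join(stanzas)

aodh_haproxy = {
    "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                 "port": "8042", "listen_port": "8042"},
    "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                          "external_fqdn": "api.testbed.osism.xyz",
                          "port": "8042", "listen_port": "8042"},
}
print(render_listeners("aodh-api", aodh_haproxy))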
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:58:01.532534 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.532545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:58:01.532554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:58:01.532573 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.532583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:58:01.532593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:58:01.532603 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.532613 | orchestrator | 2025-08-29 14:58:01.532623 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 14:58:01.532633 | orchestrator | Friday 29 August 2025 14:52:26 +0000 (0:00:00.993) 0:01:24.782 ********* 2025-08-29 14:58:01.532643 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.532652 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.532662 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.532672 | orchestrator | 2025-08-29 14:58:01.532682 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 14:58:01.532691 | orchestrator | Friday 29 August 2025 14:52:27 +0000 (0:00:01.589) 0:01:26.371 ********* 2025-08-29 14:58:01.532701 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.532711 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.532720 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.532730 | orchestrator | 2025-08-29 14:58:01.532739 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 14:58:01.532749 | orchestrator | Friday 29 August 2025 14:52:29 +0000 (0:00:02.338) 0:01:28.710 ********* 2025-08-29 14:58:01.532759 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.532769 | orchestrator | 2025-08-29 14:58:01.532778 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-08-29 14:58:01.532788 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:00.974) 0:01:29.685 ********* 2025-08-29 14:58:01.534789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.534848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.534861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.534882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.534891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.534899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.534930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.534942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.534956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.534965 | orchestrator | 2025-08-29 14:58:01.534975 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 14:58:01.534983 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:05.642) 0:01:35.328 ********* 2025-08-29 14:58:01.535009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.535020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.535034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.535043 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.535068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.535077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.535086 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.535116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.535125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.535133 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535146 | orchestrator | 2025-08-29 14:58:01.535157 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-08-29 14:58:01.535166 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.970) 0:01:36.298 ********* 2025-08-29 14:58:01.535175 | orchestrator | skipping: 
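Every container definition in these items carries a 'healthcheck' block: 'healthcheck_curl <url>' for HTTP APIs such as barbican-api, 'healthcheck_port <service> <port>' for workers that only hold a RabbitMQ or database connection, plus interval/retries/start_period/timeout values in seconds. A minimal sketch of how such a block maps onto a Docker-style HealthConfig structure (the nanosecond conversion follows the Docker Engine API; everything else, including treating this as the role's actual mechanism, is an assumption):

# Sketch: convert a kolla-style healthcheck dict into a Docker API
# HealthConfig structure (durations are expressed in nanoseconds).
def to_docker_healthcheck(hc):
    s = 1_000_000_000  # seconds -> nanoseconds
    return {
        "Test": hc["test"],                 # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311']
        "Interval": int(hc["interval"]) * s,
        "Timeout": int(hc["timeout"]) * s,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * s,
    }

barbican_api_hc = {
    "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
}
print(to_docker_healthcheck(barbican_api_hc))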
[testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:58:01.535183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:58:01.535192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:58:01.535200 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:58:01.535217 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:58:01.535233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:58:01.535241 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535248 | orchestrator | 2025-08-29 14:58:01.535256 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-08-29 14:58:01.535264 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:01.372) 0:01:37.671 ********* 2025-08-29 14:58:01.535272 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.535280 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.535288 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.535295 | orchestrator | 2025-08-29 14:58:01.535304 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 14:58:01.535312 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:01.541) 0:01:39.212 ********* 2025-08-29 14:58:01.535319 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.535327 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.535335 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.535350 | orchestrator | 2025-08-29 14:58:01.535358 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 14:58:01.535405 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:02.203) 0:01:41.416 ********* 2025-08-29 14:58:01.535413 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535430 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535437 | orchestrator | 2025-08-29 14:58:01.535454 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 14:58:01.535463 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:01.614) 0:01:43.031 ********* 2025-08-29 14:58:01.535471 | orchestrator | included: ceph-rgw for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-08-29 14:58:01.535478 | orchestrator | 2025-08-29 14:58:01.535486 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 14:58:01.535494 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:00.963) 0:01:43.994 ********* 2025-08-29 14:58:01.535518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 14:58:01.535538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 14:58:01.535546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 14:58:01.535555 | orchestrator | 2025-08-29 14:58:01.535563 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-08-29 14:58:01.535571 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:02.791) 0:01:46.786 ********* 2025-08-29 14:58:01.535579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 14:58:01.535587 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 14:58:01.535608 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 14:58:01.535638 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535646 | orchestrator | 2025-08-29 14:58:01.535654 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 14:58:01.535666 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:02.141) 0:01:48.928 ********* 2025-08-29 14:58:01.535675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:58:01.535684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:58:01.535694 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:58:01.535710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:58:01.535719 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:58:01.535755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:58:01.535779 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535792 | orchestrator | 2025-08-29 14:58:01.535804 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-08-29 14:58:01.535817 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:01.783) 0:01:50.711 ********* 2025-08-29 14:58:01.535828 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535840 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535854 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535867 | orchestrator | 2025-08-29 14:58:01.535880 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 14:58:01.535893 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:00.426) 0:01:51.137 ********* 2025-08-29 14:58:01.535905 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.535919 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.535933 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.535964 | orchestrator | 2025-08-29 14:58:01.535981 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 14:58:01.536056 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:01.685) 0:01:52.823 ********* 2025-08-29 14:58:01.536067 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, 
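ceph-rgw is the one service in this run whose haproxy entries carry a 'custom_member_list' instead of letting the role derive backends from an inventory group: the RADOS gateways on testbed-node-3/4/5 listen on 8081, while HAProxy exposes them on 6780 internally and externally. A sketch of rendering that case, passing the pre-formatted server lines straight through (illustrative only; the bind address is an assumption):

# Sketch: a haproxy entry with 'custom_member_list' supplies ready-made
# "server ..." lines, so the renderer emits them verbatim instead of building
# them from a host group. The bind address below is an assumption.
def render_custom(name, cfg, bind_ip="192.168.16.9"):
    lines = [f"listen {name}",
             f"    mode {cfg['mode']}",
             f"    bind {bind_ip}:{cfg['port']}"]
    lines += [f"    {member}" for member in cfg["custom_member_list"]]
    return "\n".join(lines)

radosgw = {
    "enabled": True, "mode": "http", "external": False, "port": "6780",
    "custom_member_list": [
        "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
    ],
}
print(render_custom("radosgw", radosgw))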
testbed-node-2 2025-08-29 14:58:01.536075 | orchestrator | 2025-08-29 14:58:01.536083 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-08-29 14:58:01.536091 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:01.000) 0:01:53.823 ********* 2025-08-29 14:58:01.536106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.536117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 
14:58:01.536166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.536179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.536218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536259 | orchestrator | 2025-08-29 14:58:01.536268 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-08-29 14:58:01.536276 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:03.939) 0:01:57.763 ********* 2025-08-29 14:58:01.536285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.536311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536374 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.536392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.536406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536458 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.536493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.536505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.536576 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.536584 | orchestrator | 2025-08-29 14:58:01.536592 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-08-29 14:58:01.536600 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:00.722) 0:01:58.485 ********* 2025-08-29 14:58:01.536619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:58:01.536628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:58:01.536637 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.536645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
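The "Add configuration ... when using single external frontend" and "Configuring firewall ..." tasks are skipped for every service and node in this run; they only execute when the corresponding toggles are enabled in the deployment (assumed here to be options along the lines of haproxy_single_external_frontend and enable_external_api_firewalld, both left at their off defaults in this testbed). A compact sketch of that skip logic, with the variable names treated as assumptions:

# Sketch of the per-service skip conditions observed in this run. The option
# names mirror kolla-ansible-style toggles but are assumptions for illustration.
haproxy_single_external_frontend = False   # per-service external listeners are used instead
enable_external_api_firewalld = False      # no firewalld rules are managed for the VIP

def should_run(task, service_enabled=True):
    if task == "single_external_frontend":
        return service_enabled and haproxy_single_external_frontend
    if task == "firewall":
        return service_enabled and enable_external_api_firewalld
    return service_enabled

for task in ("single_external_frontend", "firewall"):
    print(task, "->", "run" if should_run(task) else "skipping")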
'8776', 'tls_backend': 'no'}})  2025-08-29 14:58:01.536653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:58:01.536661 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.536690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:58:01.536700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:58:01.536709 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.536717 | orchestrator | 2025-08-29 14:58:01.536724 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-08-29 14:58:01.536732 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:01.425) 0:01:59.911 ********* 2025-08-29 14:58:01.536740 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.536748 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.536756 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.536764 | orchestrator | 2025-08-29 14:58:01.536772 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-08-29 14:58:01.536780 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:01.483) 0:02:01.395 ********* 2025-08-29 14:58:01.536792 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.536802 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.536818 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.536837 | orchestrator | 2025-08-29 14:58:01.536859 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-08-29 14:58:01.536873 | orchestrator | Friday 29 August 2025 14:53:04 +0000 (0:00:02.165) 0:02:03.560 ********* 2025-08-29 14:58:01.536884 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.536896 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.536909 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.536923 | orchestrator | 2025-08-29 14:58:01.536937 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-08-29 14:58:01.536950 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:00.399) 0:02:03.959 ********* 2025-08-29 14:58:01.536964 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.536978 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.536991 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.537021 | orchestrator | 2025-08-29 14:58:01.537034 | orchestrator | TASK [include_role : designate] ************************************************ 2025-08-29 14:58:01.537048 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:00.610) 0:02:04.570 ********* 2025-08-29 14:58:01.537063 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.537077 | orchestrator | 2025-08-29 14:58:01.537091 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-08-29 14:58:01.537123 | 
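For each database-backed service in this play (aodh, barbican, cinder so far), the proxysql-config role writes a per-service users file and a rules file that ProxySQL later merges; the "changed" results show fresh copies landing on all three controllers. The log does not show the file contents, so the snippet below is only an assumed illustration of what such per-service entries could contain; field names and the password placeholder are hypothetical, not the real template output:

# Assumed illustration of per-service ProxySQL users/rules entries written by
# the proxysql-config role on each controller. Placeholder fields only.
def proxysql_user_snippet(db_user, writer_hostgroup=0):
    return {
        "username": db_user,
        "password": "<from kolla passwords.yml>",   # hypothetical placeholder
        "default_hostgroup": writer_hostgroup,
    }

def proxysql_rule_snippet(schema, writer_hostgroup=0):
    return {
        "schemaname": schema,
        "destination_hostgroup": writer_hostgroup,
        "apply": 1,
    }

print(proxysql_user_snippet("cinder"))
print(proxysql_rule_snippet("cinder"))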
orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:00.778) 0:02:05.349 ********* 2025-08-29 14:58:01.537137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 14:58:01.537153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:58:01.537169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 14:58:01.537294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:58:01.537326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 14:58:01.537370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:58:01.537443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537546 | orchestrator | 2025-08-29 14:58:01.537560 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-08-29 14:58:01.537574 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:04.964) 0:02:10.314 ********* 2025-08-29 14:58:01.537605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:58:01.537632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:58:01.537646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 
14:58:01.537676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:58:01.537708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:58:01.537772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
2025-08-29 14:58:01.537802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537871 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.537896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537905 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.537917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:58:01.537927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:58:01.537946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.537965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.538089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.538128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.538149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.538163 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.538176 | orchestrator | 2025-08-29 14:58:01.538190 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-08-29 14:58:01.538203 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:02.095) 0:02:12.409 ********* 2025-08-29 14:58:01.538217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:58:01.538230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:58:01.538265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.538280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:58:01.538293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:58:01.538307 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.538315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:58:01.538324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:58:01.538332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.538340 | orchestrator | 2025-08-29 14:58:01.538347 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-08-29 14:58:01.538356 | orchestrator | Friday 29 August 2025 14:53:15 +0000 (0:00:01.397) 0:02:13.807 ********* 2025-08-29 14:58:01.538373 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.538381 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.538389 | orchestrator | changed: [testbed-node-1] 
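The designate loop items above show the shape of the per-service `haproxy` dict that the haproxy-config role iterates over: each entry (here `designate_api` and `designate_api_external`) declares a mode, port and listen_port, the external frontend additionally carries an `external_fqdn`, and the backend members appear elsewhere in the log as `check inter 2000 rise 2 fall 5` server lines. The following is a minimal editor's sketch, not the kolla-ansible template, that renders HAProxy-style stanzas from such a dict to make that mapping concrete; the `render` helper, the stanza layout, and the use of 192.168.16.9 as the internal VIP are illustrative assumptions.

```python
# Illustrative sketch only: renders HAProxy-style stanzas from a service dict
# shaped like the loop items in the log above. This is NOT the kolla-ansible
# haproxy-config template; layout, helper names, and the VIP are assumptions.

designate_api = {  # values copied from the designate_api loop item above
    "designate_api": {"enabled": "yes", "mode": "http", "external": False,
                      "port": "9001", "listen_port": "9001"},
    "designate_api_external": {"enabled": "yes", "mode": "http", "external": True,
                               "external_fqdn": "api.testbed.osism.xyz",
                               "port": "9001", "listen_port": "9001"},
}

# Backend nodes as seen in the healthcheck_curl targets throughout the log.
members = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}


def render(name, svc, vip):
    """Render one frontend/backend pair for a single service entry."""
    lines = [
        f"frontend {name}_front",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
        f"    default_backend {name}_back",
        "",
        f"backend {name}_back",
        f"    mode {svc['mode']}",
    ]
    for host, ip in members.items():
        # Same check parameters that appear in the custom_member_list entries.
        lines.append(f"    server {host} {ip}:{svc['port']} "
                     "check inter 2000 rise 2 fall 5")
    return "\n".join(lines)


if __name__ == "__main__":
    for name, svc in designate_api.items():
        if svc["enabled"] in ("yes", True):
            print(render(name, svc, vip="192.168.16.9"))  # internal VIP assumed
            print()
```

The real configuration is produced by the role's own templates during the run logged here; the sketch only mirrors the data shape visible in the loop output.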
2025-08-29 14:58:01.538397 | orchestrator | 2025-08-29 14:58:01.538405 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 14:58:01.538413 | orchestrator | Friday 29 August 2025 14:53:16 +0000 (0:00:01.500) 0:02:15.307 ********* 2025-08-29 14:58:01.538421 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.538429 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.538437 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.538445 | orchestrator | 2025-08-29 14:58:01.538453 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 14:58:01.538461 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:02.221) 0:02:17.528 ********* 2025-08-29 14:58:01.538469 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.538477 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.538484 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.538492 | orchestrator | 2025-08-29 14:58:01.538500 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 14:58:01.538508 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.682) 0:02:18.211 ********* 2025-08-29 14:58:01.538516 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.538524 | orchestrator | 2025-08-29 14:58:01.538532 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 14:58:01.538540 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:01.053) 0:02:19.264 ********* 2025-08-29 14:58:01.538581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:58:01.538607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.538648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}}}}) 2025-08-29 14:58:01.538664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.538693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:58:01.538719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.538743 | orchestrator | 2025-08-29 14:58:01.538755 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 14:58:01.538764 | orchestrator | Friday 29 August 2025 14:53:25 +0000 (0:00:04.680) 0:02:23.944 ********* 2025-08-29 14:58:01.538788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:58:01.538802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:58:01.538817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.538838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.538849 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.538862 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.538873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:58:01.538899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.538911 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.538920 | orchestrator | 2025-08-29 14:58:01.538929 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 14:58:01.538938 | orchestrator | Friday 29 August 2025 14:53:29 +0000 (0:00:04.638) 0:02:28.583 ********* 2025-08-29 14:58:01.538952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:58:01.538962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:58:01.538972 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.538981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:58:01.538991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:58:01.539023 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.539032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:58:01.539056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:58:01.539066 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.539075 | orchestrator | 2025-08-29 14:58:01.539084 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 14:58:01.539093 | orchestrator | Friday 29 August 2025 
14:53:34 +0000 (0:00:04.517) 0:02:33.101 ********* 2025-08-29 14:58:01.539102 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.539111 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.539120 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.539129 | orchestrator | 2025-08-29 14:58:01.539141 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-08-29 14:58:01.539151 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:01.537) 0:02:34.638 ********* 2025-08-29 14:58:01.539181 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.539191 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.539200 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.539220 | orchestrator | 2025-08-29 14:58:01.539229 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 14:58:01.539237 | orchestrator | Friday 29 August 2025 14:53:39 +0000 (0:00:03.205) 0:02:37.844 ********* 2025-08-29 14:58:01.539244 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.539252 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.539260 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.539268 | orchestrator | 2025-08-29 14:58:01.539276 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-08-29 14:58:01.539283 | orchestrator | Friday 29 August 2025 14:53:39 +0000 (0:00:00.718) 0:02:38.562 ********* 2025-08-29 14:58:01.539291 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.539299 | orchestrator | 2025-08-29 14:58:01.539307 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 14:58:01.539315 | orchestrator | Friday 29 August 2025 14:53:41 +0000 (0:00:01.215) 0:02:39.778 ********* 2025-08-29 14:58:01.539323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:58:01.539332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:58:01.539340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:58:01.539349 | orchestrator | 2025-08-29 14:58:01.539357 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 14:58:01.539365 | orchestrator | Friday 29 August 2025 14:53:44 +0000 (0:00:03.798) 0:02:43.576 ********* 2025-08-29 14:58:01.539385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:58:01.539403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:58:01.539412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.539420 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.539429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:58:01.539437 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.539445 | orchestrator | 2025-08-29 14:58:01.539453 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 14:58:01.539461 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:00.717) 0:02:44.293 ********* 2025-08-29 14:58:01.539469 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:58:01.539477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:58:01.539485 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.539493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:58:01.539501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:58:01.539509 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.539517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:58:01.539525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:58:01.539532 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.539540 | orchestrator | 2025-08-29 14:58:01.539548 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 14:58:01.539556 | orchestrator | Friday 29 August 2025 14:53:46 +0000 (0:00:00.697) 0:02:44.991 ********* 2025-08-29 14:58:01.539568 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.539576 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.539584 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.539592 | orchestrator | 2025-08-29 14:58:01.539600 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 14:58:01.539607 | orchestrator | Friday 29 August 2025 14:53:47 +0000 (0:00:01.359) 0:02:46.350 ********* 2025-08-29 14:58:01.539615 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.539623 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.539631 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.539639 | orchestrator | 2025-08-29 14:58:01.539658 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 14:58:01.539667 | orchestrator | Friday 29 August 2025 14:53:49 +0000 (0:00:02.314) 0:02:48.665 ********* 2025-08-29 14:58:01.539675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.539683 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.539691 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.539699 | orchestrator | 2025-08-29 14:58:01.539707 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 14:58:01.539715 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:00.627) 0:02:49.292 ********* 2025-08-29 14:58:01.539722 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.539730 | orchestrator | 2025-08-29 14:58:01.539738 | 
orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 14:58:01.539746 | orchestrator | Friday 29 August 2025 14:53:51 +0000 (0:00:01.086) 0:02:50.379 ********* 2025-08-29 14:58:01.539780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:58:01.539821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:58:01.539851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:58:01.539874 | orchestrator | 2025-08-29 14:58:01.539889 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 14:58:01.539902 | orchestrator | Friday 29 August 2025 14:53:55 +0000 
(0:00:03.937) 0:02:54.316 ********* 2025-08-29 14:58:01.539937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:58:01.539948 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.539956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:58:01.539970 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.540008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:58:01.540019 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.540027 | orchestrator | 2025-08-29 14:58:01.540035 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-08-29 14:58:01.540043 | orchestrator | Friday 29 August 
2025 14:53:56 +0000 (0:00:01.046) 0:02:55.362 ********* 2025-08-29 14:58:01.540051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:58:01.540060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:58:01.540069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:58:01.540083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:58:01.540091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:58:01.540100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:58:01.540108 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.540116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:58:01.540136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:58:01.540145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:58:01.540169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:58:01.540178 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
14:58:01.540186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:58:01.540194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:58:01.540202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:58:01.540210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:58:01.540228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:58:01.540236 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.540244 | orchestrator | 2025-08-29 14:58:01.540252 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-08-29 14:58:01.540260 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:01.102) 0:02:56.465 ********* 2025-08-29 14:58:01.540268 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.540284 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.540293 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.540300 | orchestrator | 2025-08-29 14:58:01.540308 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-08-29 14:58:01.540316 | orchestrator | Friday 29 August 2025 14:53:59 +0000 (0:00:01.321) 0:02:57.787 ********* 2025-08-29 14:58:01.540324 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.540332 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.540340 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.540348 | orchestrator | 2025-08-29 14:58:01.540356 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-08-29 14:58:01.540363 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:02.266) 0:03:00.054 ********* 2025-08-29 14:58:01.540371 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.540379 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.540387 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.540395 | orchestrator | 2025-08-29 14:58:01.540402 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-08-29 14:58:01.540410 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:00.599) 0:03:00.653 ********* 2025-08-29 14:58:01.540418 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 14:58:01.540426 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.540434 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.540441 | orchestrator | 2025-08-29 14:58:01.540449 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-08-29 14:58:01.540457 | orchestrator | Friday 29 August 2025 14:54:02 +0000 (0:00:00.353) 0:03:01.007 ********* 2025-08-29 14:58:01.540465 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.540472 | orchestrator | 2025-08-29 14:58:01.540480 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-08-29 14:58:01.540488 | orchestrator | Friday 29 August 2025 14:54:03 +0000 (0:00:01.087) 0:03:02.094 ********* 2025-08-29 14:58:01.540514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:58:01.540524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:58:01.540539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:58:01.540548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:58:01.540557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:58:01.540569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:58:01.540595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:58:01.540609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:58:01.540618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:58:01.540626 | orchestrator | 2025-08-29 14:58:01.540635 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-08-29 14:58:01.540643 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:03.812) 0:03:05.907 ********* 2025-08-29 14:58:01.540651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:58:01.540672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:58:01.540684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:58:01.540697 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.540706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:58:01.540715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:58:01.540723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:58:01.540732 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.540745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:58:01.540757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:58:01.540774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:58:01.540782 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.540791 | orchestrator | 2025-08-29 14:58:01.540799 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-08-29 14:58:01.540807 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.830) 0:03:06.738 ********* 2025-08-29 14:58:01.540815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:58:01.540823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:58:01.540832 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.540840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:58:01.540848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:58:01.540856 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.540864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:58:01.540873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:58:01.540881 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.540889 | orchestrator | 2025-08-29 14:58:01.540897 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-08-29 14:58:01.540905 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.913) 0:03:07.651 ********* 2025-08-29 14:58:01.540925 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.540933 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.540941 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.540949 | orchestrator | 2025-08-29 14:58:01.540960 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-08-29 14:58:01.540975 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:01.620) 0:03:09.272 ********* 2025-08-29 14:58:01.541041 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.541058 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.541097 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.541111 | orchestrator | 2025-08-29 14:58:01.541124 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-08-29 14:58:01.541159 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:02.196) 0:03:11.469 ********* 2025-08-29 14:58:01.541173 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.541186 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.541199 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.541213 | orchestrator | 2025-08-29 14:58:01.541225 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-08-29 14:58:01.541239 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.346) 0:03:11.816 ********* 2025-08-29 14:58:01.541254 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.541263 | orchestrator | 2025-08-29 14:58:01.541271 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-08-29 14:58:01.541279 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:01.073) 0:03:12.890 ********* 2025-08-29 14:58:01.541293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 14:58:01.541303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 14:58:01.541320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 14:58:01.541364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541373 | orchestrator | 2025-08-29 14:58:01.541381 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-08-29 14:58:01.541389 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:04.874) 0:03:17.765 ********* 2025-08-29 14:58:01.541398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 14:58:01.541406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541419 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.541439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 14:58:01.541452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541461 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.541469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 14:58:01.541477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541485 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.541491 | orchestrator | 2025-08-29 14:58:01.541498 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-08-29 14:58:01.541505 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.913) 0:03:18.678 ********* 2025-08-29 14:58:01.541512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:58:01.541523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:58:01.541530 | orchestrator | skipping: 
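The changed/skipping pattern in the magnum loop above (and in the manila loop further down) is consistent: only items whose value carries a `haproxy` sub-dict — the API containers — get frontend/backend configuration written, while worker containers such as magnum-conductor are skipped. A minimal Python sketch of that gate, assuming only the `{'key': ..., 'value': {...}}` item shape visible in the log; it is an illustration of the observed behaviour, not the role's actual implementation:

```python
# Sketch only: mirrors the changed/skipping results visible in the log above.
def services_needing_haproxy(project_services: dict) -> dict:
    """Return only the services that define a 'haproxy' sub-dict."""
    return {
        name: cfg
        for name, cfg in project_services.items()
        if cfg.get("enabled") and cfg.get("haproxy")
    }

magnum_services = {
    "magnum-api": {"enabled": True, "haproxy": {"magnum_api": {"port": "9511"}}},
    "magnum-conductor": {"enabled": True},  # no 'haproxy' key -> task skips this item
}

print(sorted(services_needing_haproxy(magnum_services)))  # ['magnum-api']
```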
[testbed-node-0] 2025-08-29 14:58:01.541537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:58:01.541544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:58:01.541550 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.541557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:58:01.541564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:58:01.541580 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.541588 | orchestrator | 2025-08-29 14:58:01.541594 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-08-29 14:58:01.541601 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:01.500) 0:03:20.178 ********* 2025-08-29 14:58:01.541608 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.541615 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.541621 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.541628 | orchestrator | 2025-08-29 14:58:01.541634 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-08-29 14:58:01.541641 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:02.031) 0:03:22.210 ********* 2025-08-29 14:58:01.541648 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.541654 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.541661 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.541679 | orchestrator | 2025-08-29 14:58:01.541686 | orchestrator | TASK [include_role : manila] *************************************************** 2025-08-29 14:58:01.541693 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:02.490) 0:03:24.700 ********* 2025-08-29 14:58:01.541703 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.541710 | orchestrator | 2025-08-29 14:58:01.541717 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-08-29 14:58:01.541724 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:01.149) 0:03:25.850 ********* 2025-08-29 14:58:01.541731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 14:58:01.541738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 14:58:01.541786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 14:58:01.541821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541853 | orchestrator | 2025-08-29 14:58:01.541861 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-08-29 14:58:01.541868 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:04.233) 0:03:30.083 ********* 2025-08-29 14:58:01.541876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 14:58:01.541888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541921 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.541943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 14:58:01.541952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.541981 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.541988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 14:58:01.542226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.542265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.542282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.542298 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542303 | orchestrator | 2025-08-29 14:58:01.542307 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 14:58:01.542312 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:01.188) 0:03:31.272 ********* 2025-08-29 14:58:01.542316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:58:01.542322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:58:01.542327 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:58:01.542335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:58:01.542339 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:58:01.542346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:58:01.542350 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542354 | orchestrator | 2025-08-29 14:58:01.542358 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 14:58:01.542362 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.964) 0:03:32.237 ********* 2025-08-29 14:58:01.542366 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.542369 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.542373 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.542377 | orchestrator | 2025-08-29 14:58:01.542381 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 14:58:01.542385 | orchestrator | Friday 29 August 2025 14:54:34 +0000 (0:00:01.421) 0:03:33.658 ********* 2025-08-29 14:58:01.542388 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.542392 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.542396 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.542400 | orchestrator | 2025-08-29 14:58:01.542403 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 14:58:01.542407 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:02.332) 0:03:35.990 ********* 2025-08-29 14:58:01.542411 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.542415 | orchestrator | 2025-08-29 14:58:01.542419 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 14:58:01.542422 | orchestrator | Friday 29 August 2025 14:54:38 +0000 (0:00:01.430) 0:03:37.421 ********* 2025-08-29 14:58:01.542426 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 14:58:01.542430 | orchestrator | 2025-08-29 14:58:01.542434 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 14:58:01.542438 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:03.378) 0:03:40.799 ********* 2025-08-29 14:58:01.542457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:58:01.542465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:58:01.542470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:58:01.542486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:58:01.542490 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:58:01.542529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': 
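The mariadb item above carries its backend members verbatim in `custom_member_list`, with testbed-node-0 as the only non-backup server, so HAProxy sends traffic to a single active Galera node at a time. A rough sketch of assembling such a `listen` stanza from exactly those values; the surrounding syntax (listen/bind lines, the VIP placeholder) is an assumption and not taken from this log, only the options and member lines are:

```python
# Sketch: build an HAProxy listen block from the mariadb values dumped above.
frontend_tcp_extra = ["option clitcpka", "timeout client 3600s"]
backend_tcp_extra = ["option srvtcpka", "timeout server 3600s"]
custom_member_list = [
    " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
    " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
]

stanza = "\n".join(
    ["listen mariadb", "    mode tcp", "    bind <internal_vip>:3306"]  # bind line assumed
    + [f"    {opt}" for opt in frontend_tcp_extra + backend_tcp_extra]
    + [f"   {member}" for member in custom_member_list]
)
print(stanza)
```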
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:58:01.542533 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542537 | orchestrator | 2025-08-29 14:58:01.542541 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 14:58:01.542545 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:02.682) 0:03:43.482 ********* 2025-08-29 14:58:01.542561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:58:01.542568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:58:01.542572 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:58:01.542585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:58:01.542592 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:58:01.542603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:58:01.542607 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542610 | orchestrator | 2025-08-29 14:58:01.542614 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 14:58:01.542618 | orchestrator | Friday 29 August 2025 14:54:47 +0000 (0:00:03.176) 0:03:46.658 ********* 2025-08-29 14:58:01.542622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:58:01.542636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:58:01.542641 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:58:01.542651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:58:01.542655 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:58:01.542663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:58:01.542667 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542670 | orchestrator | 2025-08-29 14:58:01.542674 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-08-29 14:58:01.542678 | orchestrator | Friday 29 August 2025 14:54:50 +0000 (0:00:02.478) 0:03:49.136 ********* 2025-08-29 14:58:01.542682 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.542686 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.542689 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.542693 | orchestrator | 2025-08-29 14:58:01.542697 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 14:58:01.542701 | orchestrator | Friday 29 August 2025 14:54:52 +0000 (0:00:02.222) 0:03:51.359 ********* 2025-08-29 14:58:01.542705 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542711 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542715 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542719 | orchestrator | 2025-08-29 14:58:01.542723 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-08-29 14:58:01.542727 | orchestrator | Friday 29 August 2025 14:54:54 +0000 (0:00:01.980) 0:03:53.339 ********* 2025-08-29 14:58:01.542730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542734 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 14:58:01.542738 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542742 | orchestrator | 2025-08-29 14:58:01.542745 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 14:58:01.542749 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:00.775) 0:03:54.115 ********* 2025-08-29 14:58:01.542753 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.542757 | orchestrator | 2025-08-29 14:58:01.542761 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 14:58:01.542764 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:01.189) 0:03:55.305 ********* 2025-08-29 14:58:01.542776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:58:01.542784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:58:01.542788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:58:01.542792 | orchestrator | 2025-08-29 14:58:01.542796 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 14:58:01.542799 | orchestrator | Friday 29 August 2025 14:54:58 +0000 (0:00:01.590) 0:03:56.895 ********* 2025-08-29 14:58:01.542803 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:58:01.542813 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:58:01.542820 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:58:01.542837 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542841 | orchestrator | 2025-08-29 14:58:01.542845 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 14:58:01.542857 | orchestrator | Friday 29 August 2025 14:54:58 +0000 (0:00:00.768) 0:03:57.664 ********* 2025-08-29 14:58:01.542863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:58:01.542868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:58:01.542872 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
14:58:01.542876 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:58:01.542884 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542887 | orchestrator | 2025-08-29 14:58:01.542891 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-08-29 14:58:01.542895 | orchestrator | Friday 29 August 2025 14:54:59 +0000 (0:00:00.697) 0:03:58.361 ********* 2025-08-29 14:58:01.542899 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542902 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542916 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542921 | orchestrator | 2025-08-29 14:58:01.542924 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-08-29 14:58:01.542928 | orchestrator | Friday 29 August 2025 14:55:00 +0000 (0:00:00.502) 0:03:58.864 ********* 2025-08-29 14:58:01.542932 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542936 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542939 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542943 | orchestrator | 2025-08-29 14:58:01.542947 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-08-29 14:58:01.542951 | orchestrator | Friday 29 August 2025 14:55:01 +0000 (0:00:01.617) 0:04:00.481 ********* 2025-08-29 14:58:01.542954 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.542958 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.542962 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.542966 | orchestrator | 2025-08-29 14:58:01.542969 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-08-29 14:58:01.542973 | orchestrator | Friday 29 August 2025 14:55:02 +0000 (0:00:00.710) 0:04:01.192 ********* 2025-08-29 14:58:01.542977 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.542981 | orchestrator | 2025-08-29 14:58:01.542985 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-08-29 14:58:01.542988 | orchestrator | Friday 29 August 2025 14:55:04 +0000 (0:00:01.694) 0:04:02.887 ********* 2025-08-29 14:58:01.543005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
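Every container dict in this run carries the same healthcheck shape: `interval`, `retries`, `start_period` and `timeout` as string seconds plus a CMD-SHELL test such as `healthcheck_curl http://192.168.16.10:9696`. How kolla hands this to the container engine is not shown in this log; purely as a rough illustration, the equivalent structure in the Docker SDK for Python would be built like this (the API expects durations in nanoseconds):

```python
import docker.types

# Healthcheck dict as dumped in the log (string values, in seconds).
hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
}

NS_PER_S = 1_000_000_000  # Docker API durations are nanoseconds

healthcheck = docker.types.Healthcheck(
    test=hc["test"],
    interval=int(hc["interval"]) * NS_PER_S,
    timeout=int(hc["timeout"]) * NS_PER_S,
    start_period=int(hc["start_period"]) * NS_PER_S,
    retries=int(hc["retries"]),
)
```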
'listen_port': '9696'}}}}) 2025-08-29 14:58:01.543017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:58:01.543042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 14:58:01.543072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:58:01.543139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.543146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 14:58:01.543194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.543227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:58:01.543264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.543383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543392 | orchestrator | 2025-08-29 14:58:01.543396 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-08-29 14:58:01.543400 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:05.206) 0:04:08.094 ********* 2025-08-29 14:58:01.543404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 14:58:01.543408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:58:01.543437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 
14:58:01.543449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:01.543476 | orchestrator | 2025-08-29 14:58:01 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:01.543480 | orchestrator | 2025-08-29 14:58:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:01.543487 | orchestrator | 2025-08-29 14:58:01.543491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
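The INFO messages in the entry above ("Task ... is in state STARTED", "Wait 1 second(s) until the next check") come from the OSISM task watcher, which polls the state of the two running tasks once per second while the kolla-ansible play streams its own output into the same console. Below is a minimal Python sketch of such a poll-and-wait loop; the task IDs are taken from the log, while get_task_state and its simulated states are hypothetical stand-ins, not the osism API.

import itertools
import time

# Task UUIDs observed in the log above.
TASK_IDS = [
    "a5807d78-e74d-404c-aee3-5ce497b575d8",
    "8b76369f-e59f-4b25-8c0f-b572ea233628",
]

# Simulated state source so the sketch runs standalone; real states would come
# from the task manager, which is not modelled here (hypothetical stand-in).
_fake_states = {
    tid: itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
    for tid in TASK_IDS
}

def get_task_state(task_id):
    return next(_fake_states[task_id])

def wait_for_tasks(task_ids, interval=1):
    # Report each task's state and keep checking once per `interval` seconds
    # until no task is in state STARTED any more.
    while True:
        states = {tid: get_task_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"INFO  | Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"INFO  | Wait {interval} second(s) until the next check")
        time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(TASK_IDS)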
2025-08-29 14:58:01.543499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.543555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:58:01.543562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543574 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.543578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 14:58:01.543607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:58:01.543669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.543676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543695 | orchestrator | skipping: [testbed-node-1] 
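The loop items in the haproxy-config tasks above follow kolla-ansible's per-service definition structure. Below is a minimal, illustrative YAML sketch of one such entry, with values copied from the neutron-server item in this log; it is not the deployment's actual configuration file, and the key layout is assumed from the log output only.

    # Illustrative only: reconstructed from the neutron-server loop item above,
    # not taken from the deployment's real group_vars or role defaults.
    neutron-server:
      container_name: neutron_server
      image: registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711
      enabled: true
      group: neutron-server
      host_in_groups: true
      volumes:
        - /etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9696"]
        timeout: "30"
      haproxy:
        neutron_server:
          enabled: true
          mode: http
          external: false
          port: "9696"
          listen_port: "9696"
        neutron_server_external:
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "9696"
          listen_port: "9696"

Only items with an enabled service and a haproxy sub-map get load-balancer configuration rendered, which appears consistent with the disabled agents (SR-IOV, linuxbridge, metering, BGP dragent, and so on) being skipped in the tasks above.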
2025-08-29 14:58:01.543699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:58:01.543735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.543744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:58:01.543773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:58:01.543787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-08-29 14:58:01.543793 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.543799 | orchestrator | 2025-08-29 14:58:01.543805 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-08-29 14:58:01.543810 | orchestrator | Friday 29 August 2025 14:55:11 +0000 (0:00:01.980) 0:04:10.075 ********* 2025-08-29 14:58:01.543817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:58:01.543825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:58:01.543832 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.543838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:58:01.543844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:58:01.543866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.543872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:58:01.543879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:58:01.543885 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.543891 | orchestrator | 2025-08-29 14:58:01.543898 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-08-29 14:58:01.543905 | orchestrator | Friday 29 August 2025 14:55:13 +0000 (0:00:01.697) 0:04:11.772 ********* 2025-08-29 14:58:01.543920 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.543926 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.543933 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.543940 | orchestrator | 2025-08-29 14:58:01.543946 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-08-29 14:58:01.543950 | orchestrator | Friday 29 August 2025 14:55:14 +0000 (0:00:01.299) 0:04:13.071 ********* 2025-08-29 14:58:01.543953 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.543957 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.543961 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.543965 | orchestrator | 2025-08-29 14:58:01.543968 | orchestrator | TASK [include_role : placement] ************************************************ 2025-08-29 14:58:01.543972 | orchestrator | Friday 29 August 2025 14:55:16 +0000 (0:00:02.292) 0:04:15.364 ********* 2025-08-29 14:58:01.543976 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.543980 | orchestrator | 2025-08-29 14:58:01.543983 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] 
****************** 2025-08-29 14:58:01.543987 | orchestrator | Friday 29 August 2025 14:55:18 +0000 (0:00:01.773) 0:04:17.138 ********* 2025-08-29 14:58:01.544013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.544021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.544026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.544047 | orchestrator | 2025-08-29 14:58:01.544051 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-08-29 14:58:01.544055 | orchestrator | Friday 29 August 2025 14:55:21 +0000 (0:00:03.560) 0:04:20.698 ********* 2025-08-29 14:58:01.544059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.544063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.544076 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.544089 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544093 | orchestrator | 2025-08-29 14:58:01.544097 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-08-29 14:58:01.544101 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:01.227) 0:04:21.926 ********* 2025-08-29 14:58:01.544105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544120 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544139 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544157 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544160 | orchestrator | 2025-08-29 14:58:01.544164 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-08-29 14:58:01.544168 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:00.840) 0:04:22.766 ********* 2025-08-29 14:58:01.544172 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.544176 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.544179 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.544183 | orchestrator | 2025-08-29 14:58:01.544187 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-08-29 14:58:01.544191 | orchestrator | Friday 29 August 2025 14:55:25 +0000 (0:00:01.344) 0:04:24.111 ********* 2025-08-29 14:58:01.544195 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.544198 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.544202 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.544206 | orchestrator | 2025-08-29 14:58:01.544210 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-08-29 14:58:01.544213 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:02.191) 0:04:26.303 ********* 2025-08-29 14:58:01.544217 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.544221 | orchestrator | 2025-08-29 14:58:01.544225 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-08-29 14:58:01.544229 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:01.552) 0:04:27.855 ********* 2025-08-29 14:58:01.544243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.544255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.544268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.544316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-08-29 14:58:01.544330 | orchestrator | 2025-08-29 14:58:01.544337 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-08-29 14:58:01.544344 | orchestrator | Friday 29 August 2025 14:55:34 +0000 (0:00:05.347) 0:04:33.202 ********* 2025-08-29 14:58:01.544363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.544373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544391 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.544399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544411 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.544431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.544439 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544443 | orchestrator | 2025-08-29 14:58:01.544446 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-08-29 14:58:01.544450 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:00.714) 0:04:33.917 ********* 2025-08-29 14:58:01.544454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544478 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  
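For orientation, the nova haproxy sub-map that the firewall and single-external-frontend tasks iterate over (and skip here) can be read out of the nova-api items above. A minimal YAML sketch, with values copied from this log and everything else assumed:

    # Illustrative only: the nova_api/nova_metadata haproxy entries as they appear
    # in the loop items above. Internal listeners bind ports 8774/8775; the
    # external API is published via api.testbed.osism.xyz.
    nova_api:
      enabled: true
      mode: http
      external: false
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_api_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_metadata:
      enabled: true
      mode: http
      external: false
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"
    nova_metadata_external:
      enabled: "no"   # the external metadata endpoint stays disabled in this run
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"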
2025-08-29 14:58:01.544501 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:58:01.544523 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544527 | orchestrator | 2025-08-29 14:58:01.544531 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-08-29 14:58:01.544535 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:01.541) 0:04:35.459 ********* 2025-08-29 14:58:01.544539 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.544542 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.544546 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.544550 | orchestrator | 2025-08-29 14:58:01.544553 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-08-29 14:58:01.544557 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:01.439) 0:04:36.898 ********* 2025-08-29 14:58:01.544561 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.544565 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.544568 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.544572 | orchestrator | 2025-08-29 14:58:01.544576 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-08-29 14:58:01.544580 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:02.228) 0:04:39.127 ********* 2025-08-29 14:58:01.544583 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.544587 | orchestrator | 2025-08-29 14:58:01.544591 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-08-29 14:58:01.544595 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:01.562) 0:04:40.689 ********* 2025-08-29 14:58:01.544599 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-08-29 14:58:01.544602 | orchestrator | 2025-08-29 14:58:01.544610 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-08-29 14:58:01.544614 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.912) 0:04:41.602 ********* 2025-08-29 14:58:01.544617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 14:58:01.544622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 14:58:01.544626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 14:58:01.544630 | orchestrator | 2025-08-29 14:58:01.544634 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-08-29 14:58:01.544646 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:05.014) 0:04:46.617 ********* 2025-08-29 14:58:01.544650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544654 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544664 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:58:01.544676 | orchestrator | 2025-08-29 14:58:01.544680 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-08-29 14:58:01.544684 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:01.605) 0:04:48.222 ********* 2025-08-29 14:58:01.544691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:58:01.544695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:58:01.544699 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:58:01.544707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:58:01.544711 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:58:01.544719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:58:01.544723 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544726 | orchestrator | 2025-08-29 14:58:01.544730 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 14:58:01.544734 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:01.942) 0:04:50.164 ********* 2025-08-29 14:58:01.544738 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.544741 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.544745 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.544749 | orchestrator | 2025-08-29 14:58:01.544753 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 14:58:01.544757 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:02.775) 0:04:52.940 ********* 2025-08-29 14:58:01.544760 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.544764 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.544768 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.544772 | orchestrator | 2025-08-29 14:58:01.544775 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-08-29 14:58:01.544787 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:03.908) 0:04:56.849 ********* 2025-08-29 14:58:01.544791 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-08-29 14:58:01.544795 | orchestrator | 2025-08-29 14:58:01.544799 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-08-29 14:58:01.544803 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:01.827) 0:04:58.676 ********* 2025-08-29 14:58:01.544809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544816 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544824 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544831 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544835 | orchestrator | 2025-08-29 14:58:01.544839 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-08-29 14:58:01.544843 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:01.653) 0:05:00.330 ********* 2025-08-29 14:58:01.544847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544851 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544859 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:58:01.544872 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544878 | orchestrator | 2025-08-29 14:58:01.544885 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-08-29 14:58:01.544900 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:01.521) 0:05:01.851 ********* 2025-08-29 14:58:01.544905 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.544909 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.544912 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.544916 | orchestrator | 2025-08-29 14:58:01.544920 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 14:58:01.544924 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:02.481) 0:05:04.333 ********* 2025-08-29 14:58:01.544931 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.544934 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.544938 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.544942 | orchestrator | 2025-08-29 14:58:01.544946 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 14:58:01.544949 | orchestrator | Friday 29 August 2025 14:56:08 +0000 (0:00:02.604) 0:05:06.937 ********* 2025-08-29 14:58:01.544953 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.544957 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.544960 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.544964 | orchestrator | 2025-08-29 14:58:01.544971 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-08-29 14:58:01.544975 | orchestrator | Friday 29 August 2025 14:56:11 +0000 (0:00:03.315) 0:05:10.252 ********* 2025-08-29 14:58:01.544978 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-08-29 14:58:01.544982 | orchestrator | 2025-08-29 14:58:01.544986 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-08-29 14:58:01.544990 | orchestrator | Friday 29 August 2025 14:56:12 +0000 (0:00:01.015) 0:05:11.268 ********* 2025-08-29 14:58:01.545005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:58:01.545009 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:58:01.545017 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:58:01.545025 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545029 | orchestrator | 2025-08-29 14:58:01.545033 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-08-29 14:58:01.545037 | orchestrator | Friday 29 August 2025 14:56:14 +0000 (0:00:01.591) 0:05:12.859 ********* 2025-08-29 14:58:01.545041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:58:01.545048 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:58:01.545064 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:58:01.545072 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545075 | orchestrator | 2025-08-29 14:58:01.545083 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-08-29 14:58:01.545087 | orchestrator | Friday 29 August 2025 14:56:15 +0000 (0:00:01.529) 0:05:14.389 ********* 2025-08-29 14:58:01.545091 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545094 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545098 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545102 | orchestrator | 2025-08-29 14:58:01.545106 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 14:58:01.545109 | orchestrator | Friday 29 August 2025 14:56:17 +0000 (0:00:01.547) 0:05:15.937 ********* 2025-08-29 14:58:01.545113 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.545117 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.545121 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.545124 | orchestrator | 2025-08-29 14:58:01.545128 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 14:58:01.545132 | orchestrator | Friday 29 August 2025 14:56:19 +0000 (0:00:02.423) 0:05:18.361 ********* 2025-08-29 14:58:01.545136 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.545139 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.545143 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.545147 | orchestrator | 2025-08-29 14:58:01.545151 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-08-29 14:58:01.545154 | orchestrator | Friday 29 August 2025 14:56:22 +0000 (0:00:03.245) 0:05:21.606 ********* 2025-08-29 14:58:01.545158 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.545162 | orchestrator | 2025-08-29 14:58:01.545166 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-08-29 14:58:01.545169 | orchestrator | Friday 29 August 2025 14:56:24 +0000 (0:00:01.747) 0:05:23.354 ********* 2025-08-29 14:58:01.545173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.545181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:58:01.545185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.545208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.545212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:58:01.545219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.545242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.545246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:58:01.545250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.545265 | orchestrator | 2025-08-29 14:58:01.545269 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 14:58:01.545272 | orchestrator | Friday 29 August 2025 14:56:28 +0000 (0:00:03.989) 0:05:27.343 ********* 2025-08-29 14:58:01.545287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}})  2025-08-29 14:58:01.545291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:58:01.545295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.545310 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.545326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:58:01.545332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.545346 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.545355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:58:01.545367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:58:01.545381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:58:01.545391 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545398 | orchestrator | 2025-08-29 14:58:01.545405 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 14:58:01.545413 | orchestrator | Friday 29 August 2025 14:56:29 +0000 (0:00:01.297) 0:05:28.640 ********* 2025-08-29 14:58:01.545417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:58:01.545421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:58:01.545425 | orchestrator | skipping: [testbed-node-0] 
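The changed/skipping pattern in the "Copying over ... haproxy config" tasks above tracks each service item's 'enabled' flag and its 'haproxy' mapping: octavia-api (enabled, with enabled haproxy entries) gets its config rendered, while the disabled nova-spicehtml5proxy and nova-serialproxy services and the octavia components without a 'haproxy' key are skipped. The minimal Python sketch below only illustrates that selection pattern as it appears in this output; it is not kolla-ansible's actual template or task logic, and the example dicts are trimmed copies of the logged items.

# Illustrative sketch of the skip/changed pattern visible in the haproxy-config output above.
def renders_haproxy_config(service):
    """Return True if a service dict like the logged items would get HAProxy config rendered,
    mirroring the observed behaviour (service enabled and at least one enabled haproxy entry)."""
    if not service.get("enabled"):
        return False
    haproxy = service.get("haproxy") or {}

    def truthy(value):
        # kolla-style flags appear both as booleans and as the strings 'yes'/'no'
        return value is True or str(value).lower() in ("yes", "true", "1")

    return any(truthy(entry.get("enabled")) for entry in haproxy.values())

octavia_api = {"enabled": True, "haproxy": {"octavia_api": {"enabled": "yes", "port": "9876"}}}
nova_spice = {"enabled": False, "haproxy": {"nova_spicehtml5proxy": {"enabled": False, "port": "6082"}}}
octavia_worker = {"enabled": True}  # no 'haproxy' key, nothing to render

assert renders_haproxy_config(octavia_api) is True       # logged as "changed"
assert renders_haproxy_config(nova_spice) is False       # logged as "skipping"
assert renders_haproxy_config(octavia_worker) is False   # logged as "skipping"

The firewall and ProxySQL tasks in this run apply their own conditions (the ProxySQL users/rules tasks report ok or changed even where the haproxy config was skipped), so their results do not follow from this flag alone.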
2025-08-29 14:58:01.545428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:58:01.545432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:58:01.545436 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:58:01.545444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:58:01.545448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545451 | orchestrator | 2025-08-29 14:58:01.545455 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-08-29 14:58:01.545459 | orchestrator | Friday 29 August 2025 14:56:31 +0000 (0:00:01.263) 0:05:29.904 ********* 2025-08-29 14:58:01.545463 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.545466 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.545470 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.545474 | orchestrator | 2025-08-29 14:58:01.545478 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 14:58:01.545481 | orchestrator | Friday 29 August 2025 14:56:32 +0000 (0:00:01.383) 0:05:31.287 ********* 2025-08-29 14:58:01.545485 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.545489 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.545495 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.545499 | orchestrator | 2025-08-29 14:58:01.545502 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 14:58:01.545506 | orchestrator | Friday 29 August 2025 14:56:34 +0000 (0:00:02.067) 0:05:33.355 ********* 2025-08-29 14:58:01.545518 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.545522 | orchestrator | 2025-08-29 14:58:01.545526 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 14:58:01.545529 | orchestrator | Friday 29 August 2025 14:56:36 +0000 (0:00:01.677) 0:05:35.033 ********* 2025-08-29 14:58:01.545536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:58:01.545544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:58:01.545548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:58:01.545552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:58:01.545568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:58:01.545577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:58:01.545581 | orchestrator | 2025-08-29 14:58:01.545585 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 14:58:01.545589 | orchestrator | Friday 29 August 2025 14:56:41 +0000 (0:00:05.328) 0:05:40.361 ********* 2025-08-29 14:58:01.545593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:58:01.545597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:58:01.545601 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:58:01.545620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:58:01.545624 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:58:01.545662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:58:01.545667 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545671 | orchestrator | 2025-08-29 14:58:01.545675 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-08-29 14:58:01.545678 | orchestrator | Friday 29 August 2025 14:56:42 +0000 (0:00:00.642) 0:05:41.004 ********* 2025-08-29 14:58:01.545690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 14:58:01.545694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:58:01.545702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:58:01.545707 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 14:58:01.545719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 14:58:01.545722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:58:01.545731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:58:01.545735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:58:01.545738 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:58:01.545746 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545750 | orchestrator | 2025-08-29 14:58:01.545754 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 14:58:01.545758 | orchestrator | Friday 29 August 2025 14:56:43 +0000 (0:00:01.639) 0:05:42.643 ********* 2025-08-29 14:58:01.545762 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545765 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545769 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545773 | orchestrator | 2025-08-29 14:58:01.545776 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 14:58:01.545780 | orchestrator | Friday 29 August 2025 14:56:44 +0000 (0:00:00.464) 0:05:43.107 ********* 2025-08-29 14:58:01.545784 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.545788 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.545791 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.545795 | orchestrator | 2025-08-29 14:58:01.545799 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 14:58:01.545803 | orchestrator | Friday 29 August 2025 14:56:45 +0000 (0:00:01.458) 0:05:44.566 ********* 2025-08-29 14:58:01.545806 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.545810 | orchestrator | 2025-08-29 14:58:01.545814 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 14:58:01.545818 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:01.766) 0:05:46.332 ********* 2025-08-29 14:58:01.545822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 14:58:01.545837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:58:01.545844 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.545856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 14:58:01.545860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:58:01.545870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-08-29 14:58:01.545882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 14:58:01.545892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:58:01.545896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.545900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.545922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 14:58:01.545930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 14:58:01.545934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout 
server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:58:01.545938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:58:01.545945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.545966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.545970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.545974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 14:58:01.545985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:58:01.545989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546058 | orchestrator | 2025-08-29 14:58:01.546062 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-08-29 14:58:01.546066 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:04.422) 0:05:50.755 ********* 2025-08-29 14:58:01.546070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 14:58:01.546079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:58:01.546083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 14:58:01.546098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:58:01.546102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:58:01.546140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:58:01.546158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:58:01.546167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:58:01.546171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546203 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546207 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 14:58:01.546214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:58:01.546221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:58:01.546242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:58:01.546246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:58:01.546256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:58:01.546260 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546264 | orchestrator | 2025-08-29 14:58:01.546268 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-08-29 14:58:01.546272 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.916) 0:05:51.671 ********* 2025-08-29 14:58:01.546278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 14:58:01.546282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 14:58:01.546286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 14:58:01.546293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 14:58:01.546297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:58:01.546302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:58:01.546306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 14:58:01.546310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:58:01.546315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:58:01.546319 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546323 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  
2025-08-29 14:58:01.546331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:58:01.546335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:58:01.546338 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546342 | orchestrator | 2025-08-29 14:58:01.546346 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-08-29 14:58:01.546352 | orchestrator | Friday 29 August 2025 14:56:54 +0000 (0:00:01.429) 0:05:53.101 ********* 2025-08-29 14:58:01.546356 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546360 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546364 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546368 | orchestrator | 2025-08-29 14:58:01.546371 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-08-29 14:58:01.546375 | orchestrator | Friday 29 August 2025 14:56:54 +0000 (0:00:00.483) 0:05:53.585 ********* 2025-08-29 14:58:01.546379 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546383 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546386 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546390 | orchestrator | 2025-08-29 14:58:01.546394 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-08-29 14:58:01.546397 | orchestrator | Friday 29 August 2025 14:56:56 +0000 (0:00:01.453) 0:05:55.039 ********* 2025-08-29 14:58:01.546404 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.546408 | orchestrator | 2025-08-29 14:58:01.546412 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-08-29 14:58:01.546418 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:01.550) 0:05:56.590 ********* 2025-08-29 14:58:01.546422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:58:01.546426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:58:01.546430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:58:01.546435 | orchestrator | 2025-08-29 14:58:01.546438 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 14:58:01.546442 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:02.383) 0:05:58.973 ********* 2025-08-29 14:58:01.546451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:58:01.546458 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:58:01.546466 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:58:01.546474 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546478 | orchestrator | 2025-08-29 14:58:01.546482 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 14:58:01.546486 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:00.397) 0:05:59.370 ********* 2025-08-29 14:58:01.546489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:58:01.546493 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:58:01.546501 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:58:01.546508 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546512 | orchestrator | 2025-08-29 14:58:01.546516 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 14:58:01.546523 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:00.591) 0:05:59.962 ********* 2025-08-29 14:58:01.546527 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
14:58:01.546535 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546538 | orchestrator | 2025-08-29 14:58:01.546545 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 14:58:01.546548 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:00.657) 0:06:00.620 ********* 2025-08-29 14:58:01.546552 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546556 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546560 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546563 | orchestrator | 2025-08-29 14:58:01.546567 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 14:58:01.546571 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:01.245) 0:06:01.865 ********* 2025-08-29 14:58:01.546575 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:01.546578 | orchestrator | 2025-08-29 14:58:01.546582 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 14:58:01.546586 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:01.413) 0:06:03.279 ********* 2025-08-29 14:58:01.546591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.546596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.546600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.546610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.546617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.546621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}}) 2025-08-29 14:58:01.546625 | orchestrator | 2025-08-29 14:58:01.546628 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 14:58:01.546632 | orchestrator | Friday 29 August 2025 14:57:10 +0000 (0:00:06.011) 0:06:09.290 ********* 2025-08-29 14:58:01.546636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.546645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.546649 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.546659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.546663 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.546674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:58:01.546678 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546682 | orchestrator | 2025-08-29 14:58:01.546685 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 14:58:01.546692 | orchestrator | Friday 29 August 2025 14:57:11 +0000 (0:00:00.583) 0:06:09.873 ********* 2025-08-29 14:58:01.546696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546733 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:58:01.546755 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546759 | orchestrator | 2025-08-29 14:58:01.546763 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 14:58:01.546766 | orchestrator | Friday 29 August 2025 14:57:12 +0000 (0:00:01.001) 0:06:10.875 ********* 2025-08-29 14:58:01.546770 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.546774 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.546778 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.546781 | orchestrator | 2025-08-29 14:58:01.546785 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules 
config] ************ 2025-08-29 14:58:01.546789 | orchestrator | Friday 29 August 2025 14:57:14 +0000 (0:00:02.465) 0:06:13.340 ********* 2025-08-29 14:58:01.546793 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.546796 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.546800 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.546804 | orchestrator | 2025-08-29 14:58:01.546808 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 14:58:01.546811 | orchestrator | Friday 29 August 2025 14:57:16 +0000 (0:00:02.073) 0:06:15.414 ********* 2025-08-29 14:58:01.546815 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546819 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546826 | orchestrator | 2025-08-29 14:58:01.546830 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 14:58:01.546834 | orchestrator | Friday 29 August 2025 14:57:16 +0000 (0:00:00.291) 0:06:15.706 ********* 2025-08-29 14:58:01.546837 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546841 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546845 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546849 | orchestrator | 2025-08-29 14:58:01.546852 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 14:58:01.546858 | orchestrator | Friday 29 August 2025 14:57:17 +0000 (0:00:00.294) 0:06:16.000 ********* 2025-08-29 14:58:01.546862 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546870 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546873 | orchestrator | 2025-08-29 14:58:01.546877 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 14:58:01.546881 | orchestrator | Friday 29 August 2025 14:57:17 +0000 (0:00:00.276) 0:06:16.277 ********* 2025-08-29 14:58:01.546885 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546888 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546892 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546896 | orchestrator | 2025-08-29 14:58:01.546900 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-08-29 14:58:01.546903 | orchestrator | Friday 29 August 2025 14:57:18 +0000 (0:00:00.529) 0:06:16.806 ********* 2025-08-29 14:58:01.546907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546911 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546915 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546918 | orchestrator | 2025-08-29 14:58:01.546924 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-08-29 14:58:01.546928 | orchestrator | Friday 29 August 2025 14:57:18 +0000 (0:00:00.324) 0:06:17.130 ********* 2025-08-29 14:58:01.546932 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.546935 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.546942 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.546946 | orchestrator | 2025-08-29 14:58:01.546949 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-08-29 14:58:01.546953 | orchestrator | 
Friday 29 August 2025 14:57:18 +0000 (0:00:00.494) 0:06:17.625 ********* 2025-08-29 14:58:01.546957 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.546961 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.546964 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.546968 | orchestrator | 2025-08-29 14:58:01.546972 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-08-29 14:58:01.546976 | orchestrator | Friday 29 August 2025 14:57:19 +0000 (0:00:00.871) 0:06:18.496 ********* 2025-08-29 14:58:01.546979 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.546983 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.546987 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.546990 | orchestrator | 2025-08-29 14:58:01.547003 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-08-29 14:58:01.547007 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:00.323) 0:06:18.820 ********* 2025-08-29 14:58:01.547011 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547014 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547018 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547022 | orchestrator | 2025-08-29 14:58:01.547025 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-08-29 14:58:01.547029 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:00.863) 0:06:19.684 ********* 2025-08-29 14:58:01.547033 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547037 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547041 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547044 | orchestrator | 2025-08-29 14:58:01.547048 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-08-29 14:58:01.547052 | orchestrator | Friday 29 August 2025 14:57:21 +0000 (0:00:00.866) 0:06:20.550 ********* 2025-08-29 14:58:01.547056 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547059 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547063 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547067 | orchestrator | 2025-08-29 14:58:01.547071 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-08-29 14:58:01.547074 | orchestrator | Friday 29 August 2025 14:57:22 +0000 (0:00:01.082) 0:06:21.633 ********* 2025-08-29 14:58:01.547078 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.547082 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.547086 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.547089 | orchestrator | 2025-08-29 14:58:01.547093 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-08-29 14:58:01.547097 | orchestrator | Friday 29 August 2025 14:57:31 +0000 (0:00:08.178) 0:06:29.812 ********* 2025-08-29 14:58:01.547100 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547104 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547108 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547112 | orchestrator | 2025-08-29 14:58:01.547115 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-08-29 14:58:01.547119 | orchestrator | Friday 29 August 2025 14:57:31 +0000 (0:00:00.700) 0:06:30.513 ********* 2025-08-29 14:58:01.547123 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.547127 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.547131 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.547134 | orchestrator | 2025-08-29 14:58:01.547138 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-08-29 14:58:01.547142 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:13.782) 0:06:44.295 ********* 2025-08-29 14:58:01.547145 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547149 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547153 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547157 | orchestrator | 2025-08-29 14:58:01.547160 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-08-29 14:58:01.547167 | orchestrator | Friday 29 August 2025 14:57:46 +0000 (0:00:00.710) 0:06:45.005 ********* 2025-08-29 14:58:01.547170 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:01.547174 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:01.547178 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:01.547182 | orchestrator | 2025-08-29 14:58:01.547185 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-08-29 14:58:01.547189 | orchestrator | Friday 29 August 2025 14:57:50 +0000 (0:00:04.171) 0:06:49.177 ********* 2025-08-29 14:58:01.547193 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.547197 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.547200 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.547204 | orchestrator | 2025-08-29 14:58:01.547208 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-08-29 14:58:01.547212 | orchestrator | Friday 29 August 2025 14:57:50 +0000 (0:00:00.319) 0:06:49.497 ********* 2025-08-29 14:58:01.547218 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.547222 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.547226 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.547230 | orchestrator | 2025-08-29 14:58:01.547234 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-08-29 14:58:01.547237 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:00.313) 0:06:49.810 ********* 2025-08-29 14:58:01.547241 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.547245 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.547249 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.547252 | orchestrator | 2025-08-29 14:58:01.547256 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-08-29 14:58:01.547260 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:00.328) 0:06:50.139 ********* 2025-08-29 14:58:01.547264 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.547268 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.547271 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.547275 | orchestrator | 2025-08-29 14:58:01.547279 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-08-29 14:58:01.547284 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:00.614) 0:06:50.753 ********* 2025-08-29 14:58:01.547288 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.547292 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.547295 | orchestrator 
| skipping: [testbed-node-2] 2025-08-29 14:58:01.547299 | orchestrator | 2025-08-29 14:58:01.547303 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-08-29 14:58:01.547307 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:00.334) 0:06:51.088 ********* 2025-08-29 14:58:01.547310 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:01.547314 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:01.547318 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:01.547322 | orchestrator | 2025-08-29 14:58:01.547326 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-08-29 14:58:01.547329 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:00.327) 0:06:51.415 ********* 2025-08-29 14:58:01.547333 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547337 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547341 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547344 | orchestrator | 2025-08-29 14:58:01.547348 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-08-29 14:58:01.547352 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:04.758) 0:06:56.174 ********* 2025-08-29 14:58:01.547356 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:01.547359 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:01.547363 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:01.547367 | orchestrator | 2025-08-29 14:58:01.547371 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:58:01.547377 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 14:58:01.547381 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 14:58:01.547385 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 14:58:01.547389 | orchestrator | 2025-08-29 14:58:01.547393 | orchestrator | 2025-08-29 14:58:01.547396 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:58:01.547400 | orchestrator | Friday 29 August 2025 14:57:58 +0000 (0:00:01.058) 0:06:57.232 ********* 2025-08-29 14:58:01.547404 | orchestrator | =============================================================================== 2025-08-29 14:58:01.547408 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.78s 2025-08-29 14:58:01.547412 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.18s 2025-08-29 14:58:01.547415 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.01s 2025-08-29 14:58:01.547419 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.64s 2025-08-29 14:58:01.547423 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.35s 2025-08-29 14:58:01.547427 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.33s 2025-08-29 14:58:01.547430 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.21s 2025-08-29 14:58:01.547434 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.01s 2025-08-29 14:58:01.547438 | 
orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.96s 2025-08-29 14:58:01.547442 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.87s 2025-08-29 14:58:01.547445 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.76s 2025-08-29 14:58:01.547449 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.68s 2025-08-29 14:58:01.547453 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.64s 2025-08-29 14:58:01.547457 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.64s 2025-08-29 14:58:01.547460 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.52s 2025-08-29 14:58:01.547464 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.42s 2025-08-29 14:58:01.547468 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.27s 2025-08-29 14:58:01.547471 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.23s 2025-08-29 14:58:01.547475 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.17s 2025-08-29 14:58:01.547481 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.04s 2025-08-29 14:58:04.570562 | orchestrator | 2025-08-29 14:58:04 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:04.573237 | orchestrator | 2025-08-29 14:58:04 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:04.576783 | orchestrator | 2025-08-29 14:58:04 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:04.576843 | orchestrator | 2025-08-29 14:58:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:07.611448 | orchestrator | 2025-08-29 14:58:07 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:07.612155 | orchestrator | 2025-08-29 14:58:07 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:07.615562 | orchestrator | 2025-08-29 14:58:07 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:07.615857 | orchestrator | 2025-08-29 14:58:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:10.652473 | orchestrator | 2025-08-29 14:58:10 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:10.652577 | orchestrator | 2025-08-29 14:58:10 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:10.653281 | orchestrator | 2025-08-29 14:58:10 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:10.653315 | orchestrator | 2025-08-29 14:58:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:13.687728 | orchestrator | 2025-08-29 14:58:13 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:13.688297 | orchestrator | 2025-08-29 14:58:13 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:13.689697 | orchestrator | 2025-08-29 14:58:13 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:13.689758 | orchestrator | 2025-08-29 14:58:13 | INFO  | Wait 1 second(s) until the next check 
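The repeated status lines above and below come from a client-side wait loop: the job has submitted three manager tasks and then polls their state every few seconds until each one leaves STARTED. A minimal sketch of that pattern in Python, assuming a hypothetical get_task_state() lookup rather than the actual osism client API:

    import time

    # Hypothetical helper; the real client queries the OSISM manager's task backend.
    def get_task_state(task_id: str) -> str:
        raise NotImplementedError("replace with a real state lookup")

    def wait_for_tasks(task_ids, interval=1.0):
        """Poll all tasks until none is left in STARTED, logging each check."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"INFO | Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO | Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

This is only an illustration of the polling behaviour visible in the log (one-second nominal wait plus per-check overhead), not the deployment tooling itself.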
2025-08-29 14:58:16.731725 | orchestrator | 2025-08-29 14:58:16 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:16.732100 | orchestrator | 2025-08-29 14:58:16 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:16.732799 | orchestrator | 2025-08-29 14:58:16 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:16.732824 | orchestrator | 2025-08-29 14:58:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:19.778713 | orchestrator | 2025-08-29 14:58:19 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:19.781183 | orchestrator | 2025-08-29 14:58:19 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:19.781865 | orchestrator | 2025-08-29 14:58:19 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:19.781914 | orchestrator | 2025-08-29 14:58:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:22.820908 | orchestrator | 2025-08-29 14:58:22 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:22.821506 | orchestrator | 2025-08-29 14:58:22 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:22.822623 | orchestrator | 2025-08-29 14:58:22 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:22.822649 | orchestrator | 2025-08-29 14:58:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:25.874941 | orchestrator | 2025-08-29 14:58:25 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:25.875090 | orchestrator | 2025-08-29 14:58:25 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:25.875954 | orchestrator | 2025-08-29 14:58:25 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:25.876001 | orchestrator | 2025-08-29 14:58:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:28.923854 | orchestrator | 2025-08-29 14:58:28 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:28.927061 | orchestrator | 2025-08-29 14:58:28 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:28.927888 | orchestrator | 2025-08-29 14:58:28 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:28.928150 | orchestrator | 2025-08-29 14:58:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:31.978209 | orchestrator | 2025-08-29 14:58:31 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:31.978687 | orchestrator | 2025-08-29 14:58:31 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:31.979896 | orchestrator | 2025-08-29 14:58:31 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:31.979931 | orchestrator | 2025-08-29 14:58:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:35.017725 | orchestrator | 2025-08-29 14:58:35 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:35.018325 | orchestrator | 2025-08-29 14:58:35 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:35.021175 | orchestrator | 2025-08-29 14:58:35 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:35.021197 
| orchestrator | 2025-08-29 14:58:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:38.063579 | orchestrator | 2025-08-29 14:58:38 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:38.064170 | orchestrator | 2025-08-29 14:58:38 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:38.065579 | orchestrator | 2025-08-29 14:58:38 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:38.065613 | orchestrator | 2025-08-29 14:58:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:41.098635 | orchestrator | 2025-08-29 14:58:41 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:41.102329 | orchestrator | 2025-08-29 14:58:41 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:41.104340 | orchestrator | 2025-08-29 14:58:41 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:41.105437 | orchestrator | 2025-08-29 14:58:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:44.138694 | orchestrator | 2025-08-29 14:58:44 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:44.140942 | orchestrator | 2025-08-29 14:58:44 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:44.142138 | orchestrator | 2025-08-29 14:58:44 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:44.142399 | orchestrator | 2025-08-29 14:58:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:47.201620 | orchestrator | 2025-08-29 14:58:47 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:47.202657 | orchestrator | 2025-08-29 14:58:47 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:47.205045 | orchestrator | 2025-08-29 14:58:47 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:47.205089 | orchestrator | 2025-08-29 14:58:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:50.258484 | orchestrator | 2025-08-29 14:58:50 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:50.261721 | orchestrator | 2025-08-29 14:58:50 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:50.264631 | orchestrator | 2025-08-29 14:58:50 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:50.265308 | orchestrator | 2025-08-29 14:58:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:53.318501 | orchestrator | 2025-08-29 14:58:53 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:53.320760 | orchestrator | 2025-08-29 14:58:53 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:53.323326 | orchestrator | 2025-08-29 14:58:53 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:53.323738 | orchestrator | 2025-08-29 14:58:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:56.380538 | orchestrator | 2025-08-29 14:58:56 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:56.383510 | orchestrator | 2025-08-29 14:58:56 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:56.385271 | orchestrator | 2025-08-29 14:58:56 | INFO  | 
Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:56.385617 | orchestrator | 2025-08-29 14:58:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:59.433339 | orchestrator | 2025-08-29 14:58:59 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:58:59.435125 | orchestrator | 2025-08-29 14:58:59 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:58:59.437522 | orchestrator | 2025-08-29 14:58:59 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:58:59.437609 | orchestrator | 2025-08-29 14:58:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:02.524490 | orchestrator | 2025-08-29 14:59:02 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:02.524594 | orchestrator | 2025-08-29 14:59:02 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:02.524627 | orchestrator | 2025-08-29 14:59:02 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:02.524640 | orchestrator | 2025-08-29 14:59:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:05.579199 | orchestrator | 2025-08-29 14:59:05 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:05.579295 | orchestrator | 2025-08-29 14:59:05 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:05.580652 | orchestrator | 2025-08-29 14:59:05 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:05.580708 | orchestrator | 2025-08-29 14:59:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:08.635695 | orchestrator | 2025-08-29 14:59:08 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:08.635801 | orchestrator | 2025-08-29 14:59:08 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:08.638952 | orchestrator | 2025-08-29 14:59:08 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:08.639365 | orchestrator | 2025-08-29 14:59:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:11.689651 | orchestrator | 2025-08-29 14:59:11 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:11.691571 | orchestrator | 2025-08-29 14:59:11 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:11.694168 | orchestrator | 2025-08-29 14:59:11 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:11.694627 | orchestrator | 2025-08-29 14:59:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:14.750298 | orchestrator | 2025-08-29 14:59:14 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:14.752085 | orchestrator | 2025-08-29 14:59:14 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:14.755089 | orchestrator | 2025-08-29 14:59:14 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:14.755268 | orchestrator | 2025-08-29 14:59:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:17.801714 | orchestrator | 2025-08-29 14:59:17 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:17.803746 | orchestrator | 2025-08-29 14:59:17 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in 
state STARTED 2025-08-29 14:59:17.805044 | orchestrator | 2025-08-29 14:59:17 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:17.805213 | orchestrator | 2025-08-29 14:59:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:20.850118 | orchestrator | 2025-08-29 14:59:20 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:20.852243 | orchestrator | 2025-08-29 14:59:20 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:20.854832 | orchestrator | 2025-08-29 14:59:20 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:20.854887 | orchestrator | 2025-08-29 14:59:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:23.910833 | orchestrator | 2025-08-29 14:59:23 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:23.913079 | orchestrator | 2025-08-29 14:59:23 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:23.916777 | orchestrator | 2025-08-29 14:59:23 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:23.917151 | orchestrator | 2025-08-29 14:59:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:26.957507 | orchestrator | 2025-08-29 14:59:26 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:26.958265 | orchestrator | 2025-08-29 14:59:26 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:26.959389 | orchestrator | 2025-08-29 14:59:26 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:26.959542 | orchestrator | 2025-08-29 14:59:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:30.014284 | orchestrator | 2025-08-29 14:59:30 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:30.016421 | orchestrator | 2025-08-29 14:59:30 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:30.019333 | orchestrator | 2025-08-29 14:59:30 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:30.019774 | orchestrator | 2025-08-29 14:59:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:33.077089 | orchestrator | 2025-08-29 14:59:33 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:33.078161 | orchestrator | 2025-08-29 14:59:33 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:33.080224 | orchestrator | 2025-08-29 14:59:33 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:33.080270 | orchestrator | 2025-08-29 14:59:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:36.134227 | orchestrator | 2025-08-29 14:59:36 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:36.136636 | orchestrator | 2025-08-29 14:59:36 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:36.139532 | orchestrator | 2025-08-29 14:59:36 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:36.139947 | orchestrator | 2025-08-29 14:59:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:39.181302 | orchestrator | 2025-08-29 14:59:39 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:39.181667 | 
orchestrator | 2025-08-29 14:59:39 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:39.184569 | orchestrator | 2025-08-29 14:59:39 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:39.184604 | orchestrator | 2025-08-29 14:59:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:42.237616 | orchestrator | 2025-08-29 14:59:42 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:42.242973 | orchestrator | 2025-08-29 14:59:42 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:42.246372 | orchestrator | 2025-08-29 14:59:42 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:42.246434 | orchestrator | 2025-08-29 14:59:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:45.290670 | orchestrator | 2025-08-29 14:59:45 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:45.295038 | orchestrator | 2025-08-29 14:59:45 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:45.296504 | orchestrator | 2025-08-29 14:59:45 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:45.296744 | orchestrator | 2025-08-29 14:59:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:48.337747 | orchestrator | 2025-08-29 14:59:48 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:48.340002 | orchestrator | 2025-08-29 14:59:48 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:48.341988 | orchestrator | 2025-08-29 14:59:48 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:48.342133 | orchestrator | 2025-08-29 14:59:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:51.393372 | orchestrator | 2025-08-29 14:59:51 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:51.394939 | orchestrator | 2025-08-29 14:59:51 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:51.397027 | orchestrator | 2025-08-29 14:59:51 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:51.397082 | orchestrator | 2025-08-29 14:59:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:54.444753 | orchestrator | 2025-08-29 14:59:54 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:54.446976 | orchestrator | 2025-08-29 14:59:54 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:54.449376 | orchestrator | 2025-08-29 14:59:54 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:54.449448 | orchestrator | 2025-08-29 14:59:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:57.492197 | orchestrator | 2025-08-29 14:59:57 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 14:59:57.494546 | orchestrator | 2025-08-29 14:59:57 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 14:59:57.495856 | orchestrator | 2025-08-29 14:59:57 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 14:59:57.495981 | orchestrator | 2025-08-29 14:59:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:00.531605 | orchestrator | 2025-08-29 15:00:00 | INFO  | Task 
d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:00.532522 | orchestrator | 2025-08-29 15:00:00 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:00.533939 | orchestrator | 2025-08-29 15:00:00 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 15:00:00.533987 | orchestrator | 2025-08-29 15:00:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:03.583362 | orchestrator | 2025-08-29 15:00:03 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:03.584586 | orchestrator | 2025-08-29 15:00:03 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:03.587650 | orchestrator | 2025-08-29 15:00:03 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 15:00:03.587710 | orchestrator | 2025-08-29 15:00:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:06.620815 | orchestrator | 2025-08-29 15:00:06 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:06.623584 | orchestrator | 2025-08-29 15:00:06 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:06.625161 | orchestrator | 2025-08-29 15:00:06 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 15:00:06.626550 | orchestrator | 2025-08-29 15:00:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:09.671264 | orchestrator | 2025-08-29 15:00:09 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:09.671737 | orchestrator | 2025-08-29 15:00:09 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:09.672780 | orchestrator | 2025-08-29 15:00:09 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 15:00:09.672829 | orchestrator | 2025-08-29 15:00:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:12.732203 | orchestrator | 2025-08-29 15:00:12 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:12.732921 | orchestrator | 2025-08-29 15:00:12 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:12.735020 | orchestrator | 2025-08-29 15:00:12 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 15:00:12.735056 | orchestrator | 2025-08-29 15:00:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:15.793377 | orchestrator | 2025-08-29 15:00:15 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:15.794381 | orchestrator | 2025-08-29 15:00:15 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:15.797965 | orchestrator | 2025-08-29 15:00:15 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state STARTED 2025-08-29 15:00:15.798065 | orchestrator | 2025-08-29 15:00:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:18.860705 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:18.863942 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:18.870642 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task 8b76369f-e59f-4b25-8c0f-b572ea233628 is in state SUCCESS 2025-08-29 15:00:18.872978 | orchestrator | 2025-08-29 15:00:18.873024 | orchestrator | 2025-08-29 
15:00:18.873037 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-08-29 15:00:18.873050 | orchestrator | 2025-08-29 15:00:18.873062 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 15:00:18.873074 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:00.747) 0:00:00.747 ********* 2025-08-29 15:00:18.873087 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.873099 | orchestrator | 2025-08-29 15:00:18.873110 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 15:00:18.873121 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:01.196) 0:00:01.943 ********* 2025-08-29 15:00:18.873132 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.873144 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.873155 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.873165 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.873176 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.873187 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.873197 | orchestrator | 2025-08-29 15:00:18.873225 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 15:00:18.873237 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:01.824) 0:00:03.768 ********* 2025-08-29 15:00:18.873247 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.873258 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.873269 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.873279 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.873290 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.873300 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.873311 | orchestrator | 2025-08-29 15:00:18.873322 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 15:00:18.873333 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.707) 0:00:04.475 ********* 2025-08-29 15:00:18.873344 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.873354 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.873367 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.873385 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.873397 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.873408 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.873418 | orchestrator | 2025-08-29 15:00:18.873429 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 15:00:18.873440 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:01.081) 0:00:05.557 ********* 2025-08-29 15:00:18.873451 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.873462 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.873472 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.873483 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.873493 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.873504 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.873514 | orchestrator | 2025-08-29 15:00:18.873525 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 15:00:18.873536 | orchestrator | Friday 29 August 2025 14:47:42 +0000 
(0:00:00.677) 0:00:06.234 ********* 2025-08-29 15:00:18.873546 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.873557 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.873567 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.873578 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.873588 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.873599 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.873609 | orchestrator | 2025-08-29 15:00:18.873620 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 15:00:18.873631 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.551) 0:00:06.785 ********* 2025-08-29 15:00:18.873641 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.873667 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.873678 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.873689 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.873699 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.873710 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.873720 | orchestrator | 2025-08-29 15:00:18.873731 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 15:00:18.874153 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.878) 0:00:07.664 ********* 2025-08-29 15:00:18.874174 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.874187 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.874198 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.874208 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.874219 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.874230 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.874241 | orchestrator | 2025-08-29 15:00:18.874252 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 15:00:18.874262 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.927) 0:00:08.592 ********* 2025-08-29 15:00:18.874273 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.874284 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.874295 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.874305 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.874316 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.874327 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.874337 | orchestrator | 2025-08-29 15:00:18.874348 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 15:00:18.874359 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:01.231) 0:00:09.823 ********* 2025-08-29 15:00:18.874370 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:18.874381 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:18.874393 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:18.874403 | orchestrator | 2025-08-29 15:00:18.874414 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 15:00:18.874425 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:00.563) 0:00:10.387 ********* 2025-08-29 15:00:18.874435 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.874446 | orchestrator | ok: [testbed-node-1] 2025-08-29 
15:00:18.874457 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.874467 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.874478 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.874488 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.874499 | orchestrator | 2025-08-29 15:00:18.874524 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 15:00:18.874536 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.183) 0:00:11.570 ********* 2025-08-29 15:00:18.874547 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:18.874558 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:18.874569 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:18.874579 | orchestrator | 2025-08-29 15:00:18.874590 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 15:00:18.874601 | orchestrator | Friday 29 August 2025 14:47:51 +0000 (0:00:03.308) 0:00:14.879 ********* 2025-08-29 15:00:18.874611 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:18.874623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:18.874633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:18.874730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.874745 | orchestrator | 2025-08-29 15:00:18.874766 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 15:00:18.874777 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:00.779) 0:00:15.658 ********* 2025-08-29 15:00:18.874804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.874819 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.874830 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.874910 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.874927 | orchestrator | 2025-08-29 15:00:18.874945 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 15:00:18.875065 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:00.642) 0:00:16.301 ********* 2025-08-29 15:00:18.875081 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.875104 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.875246 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.875268 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.875471 | orchestrator | 2025-08-29 15:00:18.875492 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 15:00:18.875503 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.516) 0:00:16.817 ********* 2025-08-29 15:00:18.875517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 14:47:48.758470', 'end': '2025-08-29 14:47:49.036599', 'delta': '0:00:00.278129', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.875550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 14:47:49.962884', 'end': '2025-08-29 14:47:50.253793', 'delta': '0:00:00.290909', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.875586 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 14:47:50.776019', 'end': '2025-08-29 14:47:51.066716', 'delta': '0:00:00.290697', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.875598 | orchestrator | skipping: [testbed-node-0] 
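The skipped items above echo what the "Find a running mon container" lookup returned: `docker ps -q --filter name=ceph-mon-<hostname>` produced empty output on every monitor, so no running_mon fact is derived on this fresh deployment. A small stand-alone sketch of the same check, written as plain Python around the docker CLI run locally rather than delegated to each host as the ceph-ansible role does:

    import subprocess

    def find_running_mon(hostnames):
        """Return the first host whose ceph-mon container is running, else None."""
        for host in hostnames:
            # Same filter the role uses; empty stdout means no matching container.
            result = subprocess.run(
                ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
                capture_output=True, text=True, check=False,
            )
            if result.stdout.strip():
                return host
        return None

    # Example: on a fresh testbed all three lookups come back empty, so this prints None.
    print(find_running_mon(["testbed-node-0", "testbed-node-1", "testbed-node-2"]))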
2025-08-29 15:00:18.875609 | orchestrator | 2025-08-29 15:00:18.875621 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 15:00:18.875632 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.253) 0:00:17.071 ********* 2025-08-29 15:00:18.875643 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.875654 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.875664 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.875675 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.875686 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.875696 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.875755 | orchestrator | 2025-08-29 15:00:18.875767 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 15:00:18.875778 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:01.821) 0:00:18.892 ********* 2025-08-29 15:00:18.875788 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.875799 | orchestrator | 2025-08-29 15:00:18.875810 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 15:00:18.875874 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.638) 0:00:19.530 ********* 2025-08-29 15:00:18.875887 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.875898 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.875946 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.875958 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.875969 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.875980 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.875991 | orchestrator | 2025-08-29 15:00:18.876001 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 15:00:18.876012 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:01.388) 0:00:20.919 ********* 2025-08-29 15:00:18.876023 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876034 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.876044 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.876055 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.876065 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.876076 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.876087 | orchestrator | 2025-08-29 15:00:18.876097 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:00:18.876108 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:02.193) 0:00:23.112 ********* 2025-08-29 15:00:18.876119 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876130 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.876240 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.876261 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.876278 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.876296 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.876315 | orchestrator | 2025-08-29 15:00:18.876334 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 15:00:18.876545 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:01.261) 0:00:24.374 ********* 2025-08-29 15:00:18.876565 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876576 | 
orchestrator | 2025-08-29 15:00:18.876587 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 15:00:18.876598 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.134) 0:00:24.508 ********* 2025-08-29 15:00:18.876609 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876619 | orchestrator | 2025-08-29 15:00:18.876630 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:00:18.876641 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.221) 0:00:24.730 ********* 2025-08-29 15:00:18.876651 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876662 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.876672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.876683 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.876694 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.876704 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.876715 | orchestrator | 2025-08-29 15:00:18.876726 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 15:00:18.876747 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.750) 0:00:25.480 ********* 2025-08-29 15:00:18.876759 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876770 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.876780 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.876791 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.876802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.876812 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.876823 | orchestrator | 2025-08-29 15:00:18.876833 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 15:00:18.876875 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.821) 0:00:26.302 ********* 2025-08-29 15:00:18.876894 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.876906 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.876916 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.876927 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.876937 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.876948 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.876959 | orchestrator | 2025-08-29 15:00:18.876969 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 15:00:18.876989 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.660) 0:00:26.962 ********* 2025-08-29 15:00:18.877000 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.877010 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.877024 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.877042 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.877057 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.877085 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.877104 | orchestrator | 2025-08-29 15:00:18.877121 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 15:00:18.877137 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:00.898) 0:00:27.861 ********* 2025-08-29 15:00:18.877154 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:00:18.877171 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.877189 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.877206 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.877222 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.877240 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.877257 | orchestrator | 2025-08-29 15:00:18.877275 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 15:00:18.877293 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.775) 0:00:28.637 ********* 2025-08-29 15:00:18.877327 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.877345 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.877363 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.877382 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.877400 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.877419 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.877431 | orchestrator | 2025-08-29 15:00:18.877443 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 15:00:18.877462 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.979) 0:00:29.617 ********* 2025-08-29 15:00:18.877475 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.877486 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.877496 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.877507 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.877523 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.877536 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.877546 | orchestrator | 2025-08-29 15:00:18.877558 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 15:00:18.877568 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.579) 0:00:30.197 ********* 2025-08-29 15:00:18.877580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part1', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part14', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part15', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part16', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.877728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.877742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877776 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part1', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part14', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part15', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part16', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.877925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.877937 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.877949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-08-29 15:00:18.877971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.877982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part1', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part14', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part15', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part16', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878146 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.878165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9', 'dm-uuid-LVM-PSPH14GH09J0kA1RbVItmiOZIquYOG3k2u45bJBRBaFh2iYJjIb15CODRaJofD86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0', 
'dm-uuid-LVM-O97dyzANmDt8UDhQLEsHrELT6wzi4qzyFe25sty9BiB38XEHjGmceShZKbZzbbST'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878244 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.878255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12', 'dm-uuid-LVM-PcnBM91jI969xeG2G7spnVSuPPQuboI2IdFfTcCUsQNwKETonsuK6rNQiC1GDbRm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63', 'dm-uuid-LVM-nQqFVEZid8ujOgcFssAfSQAYM1cLhlhU1nnw3Phm825bc5saJyUoGvtqQH3idFV2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LMClOH-HTcX-urtS-wxv0-f3dv-LUYl-ccXnxv', 'scsi-0QEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d', 'scsi-SQEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XZyA7j-qZag-aiwL-kgFz-9mi5-BfHq-dkJ9GF', 'scsi-0QEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e', 'scsi-SQEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba', 'scsi-SQEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878522 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.878533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LAiBWk-w2Ea-22E3-i5Oe-rKKc-qMf9-rDzPwx', 'scsi-0QEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8', 'scsi-SQEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uJTqqQ-Lt2i-w1fm-dhFc-ptEh-BsCY-RCX3FH', 'scsi-0QEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048', 'scsi-SQEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008', 'scsi-SQEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c', 'dm-uuid-LVM-dQwh4LB0g1qzbRKP9aVHn3E0vVB9cJBFvaYV1oXfrl50GIhqBubQZQbYq24RSU4B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec', 
'dm-uuid-LVM-JXf4c6esfPqDz0wrFTQC8LaNYTcXKZDr2ceiUQy0TONpn0mSquCdR1hAyIo2oDVd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878698 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.878729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:18.878971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.878992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oNGlDP-zHil-MfZ0-dBFL-53J0-JNRG-FM0WkY', 'scsi-0QEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166', 'scsi-SQEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.879011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EKS2Kg-ziXY-QzeE-q2JM-mBsR-U4k8-6dgOfS', 'scsi-0QEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf', 'scsi-SQEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.879030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a', 'scsi-SQEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.879059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:18.879086 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.879105 | orchestrator | 2025-08-29 15:00:18.879123 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 15:00:18.879141 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:01.139) 0:00:31.336 ********* 2025-08-29 15:00:18.879166 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.879185 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880123 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880214 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880252 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880265 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880276 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880335 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part1', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part14', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part15', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part16', 'scsi-SQEMU_QEMU_HARDDISK_6fb25367-3937-467c-ae17-e945c0f5ac09-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880358 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880371 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880387 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880407 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880419 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880438 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880449 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880461 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880479 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880500 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part1', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part14', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part15', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part16', 'scsi-SQEMU_QEMU_HARDDISK_b2c6c1a3-b680-4669-a768-f8e5d905aa15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880525 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880538 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.880551 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880567 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880597 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880615 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880627 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880641 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880653 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880681 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part1', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part14', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part15', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part16', 'scsi-SQEMU_QEMU_HARDDISK_a1c20fb4-039a-44dc-b429-5b04d60e1d33-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880702 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.880729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9', 'dm-uuid-LVM-PSPH14GH09J0kA1RbVItmiOZIquYOG3k2u45bJBRBaFh2iYJjIb15CODRaJofD86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0', 'dm-uuid-LVM-O97dyzANmDt8UDhQLEsHrELT6wzi4qzyFe25sty9BiB38XEHjGmceShZKbZzbbST'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880787 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.880801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12', 'dm-uuid-LVM-PcnBM91jI969xeG2G7spnVSuPPQuboI2IdFfTcCUsQNwKETonsuK6rNQiC1GDbRm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63', 'dm-uuid-LVM-nQqFVEZid8ujOgcFssAfSQAYM1cLhlhU1nnw3Phm825bc5saJyUoGvtqQH3idFV2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880876 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880977 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.880994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LAiBWk-w2Ea-22E3-i5Oe-rKKc-qMf9-rDzPwx', 'scsi-0QEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8', 'scsi-SQEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881052 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uJTqqQ-Lt2i-w1fm-dhFc-ptEh-BsCY-RCX3FH', 'scsi-0QEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048', 'scsi-SQEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008', 'scsi-SQEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881124 | orchestrator | skipping: [testbed-node-3] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881136 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.881152 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c', 'dm-uuid-LVM-dQwh4LB0g1qzbRKP9aVHn3E0vVB9cJBFvaYV1oXfrl50GIhqBubQZQbYq24RSU4B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LMClOH-HTcX-urtS-wxv0-f3dv-LUYl-ccXnxv', 'scsi-0QEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d', 'scsi-SQEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec', 'dm-uuid-LVM-JXf4c6esfPqDz0wrFTQC8LaNYTcXKZDr2ceiUQy0TONpn0mSquCdR1hAyIo2oDVd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XZyA7j-qZag-aiwL-kgFz-9mi5-BfHq-dkJ9GF', 'scsi-0QEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e', 'scsi-SQEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba', 'scsi-SQEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881331 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881343 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881355 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881367 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.881378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881412 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881458 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oNGlDP-zHil-MfZ0-dBFL-53J0-JNRG-FM0WkY', 'scsi-0QEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166', 'scsi-SQEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EKS2Kg-ziXY-QzeE-q2JM-mBsR-U4k8-6dgOfS', 'scsi-0QEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf', 'scsi-SQEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a', 'scsi-SQEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:18.881520 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.881531 | orchestrator | 2025-08-29 15:00:18.881543 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 15:00:18.881556 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:02.192) 0:00:33.528 ********* 2025-08-29 15:00:18.881567 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.881579 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.881590 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.881600 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.881611 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.881622 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.881632 | orchestrator | 2025-08-29 15:00:18.881644 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 15:00:18.881661 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:01.125) 0:00:34.653 ********* 2025-08-29 15:00:18.881672 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.881683 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.881694 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.881704 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.881715 | orchestrator | ok: 
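The long list of skipped device items above is a per-disk loop that only does work when OSD auto-discovery is enabled; with osd_auto_discovery evaluating to false (the false_condition shown for every item), each entry of ansible_facts.devices is enumerated and skipped. A minimal sketch of such a guarded loop, with variable names assumed for illustration rather than taken from the actual role, could look like:

- name: Collect candidate OSD devices when auto-discovery is enabled (sketch)
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when:
    - osd_auto_discovery | default(False) | bool
    - item.value.partitions | length == 0   # skip disks that already carry partitions
    - item.value.holders | length == 0      # skip disks already claimed by LVM/Ceph
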
[testbed-node-4] 2025-08-29 15:00:18.881726 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.881736 | orchestrator | 2025-08-29 15:00:18.881748 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:00:18.881759 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:00.722) 0:00:35.376 ********* 2025-08-29 15:00:18.881769 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.881780 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.881791 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.881802 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.881817 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.881828 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.881925 | orchestrator | 2025-08-29 15:00:18.881941 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:00:18.881953 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:00.853) 0:00:36.230 ********* 2025-08-29 15:00:18.881964 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.881975 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.881986 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.881998 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.882008 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.882211 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.882235 | orchestrator | 2025-08-29 15:00:18.882246 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:00:18.882257 | orchestrator | Friday 29 August 2025 14:48:13 +0000 (0:00:00.846) 0:00:37.078 ********* 2025-08-29 15:00:18.882269 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.882280 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.882290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.882302 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.882313 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.882325 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.882335 | orchestrator | 2025-08-29 15:00:18.882361 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:00:18.882386 | orchestrator | Friday 29 August 2025 14:48:14 +0000 (0:00:00.819) 0:00:37.897 ********* 2025-08-29 15:00:18.882399 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.882409 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.882420 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.882431 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.882443 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.882454 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.882464 | orchestrator | 2025-08-29 15:00:18.882475 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 15:00:18.882485 | orchestrator | Friday 29 August 2025 14:48:14 +0000 (0:00:00.539) 0:00:38.437 ********* 2025-08-29 15:00:18.882495 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:18.882505 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-08-29 15:00:18.882515 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-08-29 15:00:18.882524 | orchestrator | ok: [testbed-node-1] => 
(item=testbed-node-0) 2025-08-29 15:00:18.882534 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-08-29 15:00:18.882544 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 15:00:18.882554 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:00:18.882563 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 15:00:18.882584 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 15:00:18.882594 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:00:18.882604 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 15:00:18.882614 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 15:00:18.882624 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-08-29 15:00:18.882634 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 15:00:18.882643 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-08-29 15:00:18.882653 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 15:00:18.882663 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 15:00:18.882673 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 15:00:18.882683 | orchestrator | 2025-08-29 15:00:18.882694 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 15:00:18.882704 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:02.833) 0:00:41.270 ********* 2025-08-29 15:00:18.882714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:18.882724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:18.882734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:18.882744 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.882754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 15:00:18.882763 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 15:00:18.882773 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 15:00:18.882782 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-08-29 15:00:18.882792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 15:00:18.882801 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 15:00:18.882813 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.882824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:00:18.882836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:00:18.882870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:00:18.882882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.882893 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 15:00:18.882904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 15:00:18.882915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 15:00:18.882926 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.882936 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.882947 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 15:00:18.882959 | orchestrator | skipping: [testbed-node-5] => 
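The "Set_fact _monitor_addresses - ipv4" task above loops over the monitor hosts (testbed-node-0/1/2) on every play host and accumulates one name/address pair per monitor, which is why each node reports an ok per monitor item. A hedged sketch of that accumulation pattern, assuming the monitor group is called "mons" and the address comes from ansible_host:

- name: Set_fact _monitor_addresses - ipv4 (sketch)
  ansible.builtin.set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_host']}] }}"
  loop: "{{ groups[mon_group_name | default('mons')] }}"
  when: ip_version | default('ipv4') == 'ipv4'
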
(item=testbed-node-1)  2025-08-29 15:00:18.882970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 15:00:18.882980 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.882990 | orchestrator | 2025-08-29 15:00:18.883000 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 15:00:18.883016 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:01.389) 0:00:42.660 ********* 2025-08-29 15:00:18.883026 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.883036 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.883046 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.883056 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.883065 | orchestrator | 2025-08-29 15:00:18.883075 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:00:18.883086 | orchestrator | Friday 29 August 2025 14:48:20 +0000 (0:00:01.484) 0:00:44.145 ********* 2025-08-29 15:00:18.883102 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.883113 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.883123 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.883133 | orchestrator | 2025-08-29 15:00:18.883142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:00:18.883153 | orchestrator | Friday 29 August 2025 14:48:20 +0000 (0:00:00.297) 0:00:44.443 ********* 2025-08-29 15:00:18.883163 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.883172 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.883189 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.883200 | orchestrator | 2025-08-29 15:00:18.883211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:00:18.883220 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:00.947) 0:00:45.391 ********* 2025-08-29 15:00:18.883230 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.883240 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.883250 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.883260 | orchestrator | 2025-08-29 15:00:18.883269 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:00:18.883279 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.964) 0:00:46.356 ********* 2025-08-29 15:00:18.883289 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.883298 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.883309 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.883318 | orchestrator | 2025-08-29 15:00:18.883328 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:00:18.883338 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:01.099) 0:00:47.455 ********* 2025-08-29 15:00:18.883347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.883357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.883367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.883376 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
15:00:18.883386 | orchestrator | 2025-08-29 15:00:18.883396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:00:18.883406 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.538) 0:00:47.993 ********* 2025-08-29 15:00:18.883416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.883425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.883435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.883445 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.883455 | orchestrator | 2025-08-29 15:00:18.883464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:00:18.883474 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:00.641) 0:00:48.635 ********* 2025-08-29 15:00:18.883483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.883493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.883503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.883513 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.883523 | orchestrator | 2025-08-29 15:00:18.883533 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:00:18.883543 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:00.663) 0:00:49.298 ********* 2025-08-29 15:00:18.883552 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.883562 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.883572 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.883582 | orchestrator | 2025-08-29 15:00:18.883591 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:00:18.883601 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:00.513) 0:00:49.812 ********* 2025-08-29 15:00:18.883611 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:00:18.883627 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:00:18.883637 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:00:18.883647 | orchestrator | 2025-08-29 15:00:18.883657 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 15:00:18.883667 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:00.849) 0:00:50.662 ********* 2025-08-29 15:00:18.883677 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:18.883687 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:18.883697 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:18.883707 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-08-29 15:00:18.883716 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:00:18.883726 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:00:18.883736 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:00:18.883745 | orchestrator | 2025-08-29 15:00:18.883755 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] 
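The "Set_fact rgw_instances" task above builds one entry per RGW instance on each rados gateway host (a single instance here, hence item=0), combining the _radosgw_address that was just resolved with a per-instance frontend port. A rough sketch under assumed variable names, not the actual role code:

- name: Set_fact rgw_instances (sketch)
  ansible.builtin.set_fact:
    rgw_instances: >-
      {{ rgw_instances | default([]) +
         [{'instance_name': 'rgw' ~ item,
           'radosgw_address': _radosgw_address,
           'radosgw_frontend_port': radosgw_frontend_port | default(8080) | int + item | int}] }}
  loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"
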
******************************** 2025-08-29 15:00:18.883765 | orchestrator | Friday 29 August 2025 14:48:28 +0000 (0:00:00.941) 0:00:51.603 ********* 2025-08-29 15:00:18.883780 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:18.883790 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:18.883799 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:18.883809 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-08-29 15:00:18.883819 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:00:18.883829 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:00:18.883852 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:00:18.883864 | orchestrator | 2025-08-29 15:00:18.883874 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.883884 | orchestrator | Friday 29 August 2025 14:48:30 +0000 (0:00:02.381) 0:00:53.984 ********* 2025-08-29 15:00:18.883902 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.883913 | orchestrator | 2025-08-29 15:00:18.883923 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:18.883933 | orchestrator | Friday 29 August 2025 14:48:32 +0000 (0:00:01.759) 0:00:55.744 ********* 2025-08-29 15:00:18.883943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.883953 | orchestrator | 2025-08-29 15:00:18.883962 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:18.883972 | orchestrator | Friday 29 August 2025 14:48:33 +0000 (0:00:01.624) 0:00:57.368 ********* 2025-08-29 15:00:18.883982 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.883992 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.884002 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.884012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.884022 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.884031 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.884041 | orchestrator | 2025-08-29 15:00:18.884051 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.884061 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:00.952) 0:00:58.321 ********* 2025-08-29 15:00:18.884070 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.884090 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.884100 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.884110 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.884120 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.884129 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.884139 | orchestrator | 2025-08-29 15:00:18.884149 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 
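The ceph_run_cmd and ceph_admin_command facts set a few tasks back are computed once and delegated to every host in the loop, which is why the output shows testbed-node-0 -> testbed-node-N and testbed-manager entries. In a containerized deployment this typically resolves to a wrapper that runs the ceph CLI inside the Ceph image; a sketch of that idea, with the image reference and binary assumed:

- name: Set_fact ceph_run_cmd (sketch, containerized case)
  ansible.builtin.set_fact:
    ceph_run_cmd: >-
      {{ container_binary | default('docker') }} run --rm --net=host
      -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z
      --entrypoint=ceph {{ ceph_docker_image | default('quay.io/ceph/ceph:v18') }}
  delegate_to: "{{ item }}"
  delegate_facts: true   # store the fact on the delegated host, not on the looping host
  run_once: true
  loop: "{{ groups['all'] }}"
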
15:00:18.884160 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:01.286) 0:00:59.608 ********* 2025-08-29 15:00:18.884170 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.884179 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.884189 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.884199 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.884209 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.884219 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.884230 | orchestrator | 2025-08-29 15:00:18.884239 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.884249 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:01.456) 0:01:01.065 ********* 2025-08-29 15:00:18.884259 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.884269 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.884279 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.884289 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.884298 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.884309 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.884319 | orchestrator | 2025-08-29 15:00:18.884329 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.884339 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:01.380) 0:01:02.445 ********* 2025-08-29 15:00:18.884349 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.884359 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.884369 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.884378 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.884388 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.884398 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.884408 | orchestrator | 2025-08-29 15:00:18.884418 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:18.884428 | orchestrator | Friday 29 August 2025 14:48:39 +0000 (0:00:01.038) 0:01:03.484 ********* 2025-08-29 15:00:18.884438 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.884447 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.884457 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.884467 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.884476 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.884486 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.884496 | orchestrator | 2025-08-29 15:00:18.884507 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.884516 | orchestrator | Friday 29 August 2025 14:48:40 +0000 (0:00:00.725) 0:01:04.209 ********* 2025-08-29 15:00:18.884527 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.884537 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.884546 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.884555 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.884565 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.884575 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.884584 | orchestrator | 2025-08-29 15:00:18.884595 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:18.884604 | orchestrator | Friday 29 
August 2025 14:48:41 +0000 (0:00:00.842) 0:01:05.052 ********* 2025-08-29 15:00:18.884619 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.884629 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.884638 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.884648 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.884658 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.884679 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.884688 | orchestrator | 2025-08-29 15:00:18.884698 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.884708 | orchestrator | Friday 29 August 2025 14:48:42 +0000 (0:00:01.178) 0:01:06.230 ********* 2025-08-29 15:00:18.884718 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.884728 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.884738 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.884748 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.884758 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.884767 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.884777 | orchestrator | 2025-08-29 15:00:18.884787 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.884797 | orchestrator | Friday 29 August 2025 14:48:44 +0000 (0:00:01.580) 0:01:07.810 ********* 2025-08-29 15:00:18.884807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.884817 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.884827 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.884837 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.884905 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.884916 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.884925 | orchestrator | 2025-08-29 15:00:18.884935 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.884945 | orchestrator | Friday 29 August 2025 14:48:44 +0000 (0:00:00.695) 0:01:08.505 ********* 2025-08-29 15:00:18.884955 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.884965 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.884974 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.884982 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.884990 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.884997 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.885005 | orchestrator | 2025-08-29 15:00:18.885013 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.885021 | orchestrator | Friday 29 August 2025 14:48:46 +0000 (0:00:01.238) 0:01:09.744 ********* 2025-08-29 15:00:18.885029 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885037 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885045 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885053 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.885061 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.885069 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.885078 | orchestrator | 2025-08-29 15:00:18.885086 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.885094 | orchestrator | Friday 29 August 2025 14:48:47 +0000 (0:00:01.011) 0:01:10.755 ********* 2025-08-29 15:00:18.885102 | 
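The per-daemon "Check for a ... container" tasks above simply look for a running container named after the daemon and host, and only hosts in the matching group run each check (the rest skip); the result then feeds the handler_*_status facts that follow. A simplified sketch, assuming container names like ceph-osd-<hostname>:

- name: Check for an osd container (sketch)
  ansible.builtin.command:
    cmd: "{{ container_binary | default('docker') }} ps -q --filter name=ceph-osd-{{ ansible_facts['hostname'] }}"
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups['osds']

- name: Set_fact handler_osd_status (sketch)
  ansible.builtin.set_fact:
    handler_osd_status: "{{ ceph_osd_container_stat.stdout_lines | default([]) | length > 0 }}"
  when: inventory_hostname in groups['osds']
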
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885110 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885118 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885126 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.885134 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.885142 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.885150 | orchestrator | 2025-08-29 15:00:18.885158 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.885166 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:01.293) 0:01:12.049 ********* 2025-08-29 15:00:18.885174 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885182 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885190 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885198 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.885206 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.885214 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.885221 | orchestrator | 2025-08-29 15:00:18.885230 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.885238 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.909) 0:01:12.958 ********* 2025-08-29 15:00:18.885253 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885261 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885269 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885277 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.885284 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.885292 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.885300 | orchestrator | 2025-08-29 15:00:18.885308 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.885316 | orchestrator | Friday 29 August 2025 14:48:50 +0000 (0:00:01.224) 0:01:14.182 ********* 2025-08-29 15:00:18.885324 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885340 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885348 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.885356 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.885364 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.885372 | orchestrator | 2025-08-29 15:00:18.885380 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.885388 | orchestrator | Friday 29 August 2025 14:48:51 +0000 (0:00:00.842) 0:01:15.025 ********* 2025-08-29 15:00:18.885396 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.885403 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.885411 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.885419 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.885427 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.885434 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.885442 | orchestrator | 2025-08-29 15:00:18.885450 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.885458 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.941) 0:01:15.966 ********* 2025-08-29 15:00:18.885466 | orchestrator | ok: [testbed-node-0] 2025-08-29 
15:00:18.885474 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.885482 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.885490 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.885498 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.885506 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.885514 | orchestrator | 2025-08-29 15:00:18.885522 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.885530 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.710) 0:01:16.677 ********* 2025-08-29 15:00:18.885542 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.885551 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.885559 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.885567 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.885575 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.885582 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.885590 | orchestrator | 2025-08-29 15:00:18.885598 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-08-29 15:00:18.885606 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:01.388) 0:01:18.065 ********* 2025-08-29 15:00:18.885614 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.885622 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.885630 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.885639 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.885647 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.885655 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.885662 | orchestrator | 2025-08-29 15:00:18.885671 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-08-29 15:00:18.885683 | orchestrator | Friday 29 August 2025 14:48:56 +0000 (0:00:02.058) 0:01:20.124 ********* 2025-08-29 15:00:18.885696 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.885710 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.885724 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.885744 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.885764 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.885776 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.885788 | orchestrator | 2025-08-29 15:00:18.885802 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-08-29 15:00:18.885815 | orchestrator | Friday 29 August 2025 14:48:58 +0000 (0:00:02.301) 0:01:22.426 ********* 2025-08-29 15:00:18.885828 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.885862 | orchestrator | 2025-08-29 15:00:18.885878 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-08-29 15:00:18.885890 | orchestrator | Friday 29 August 2025 14:49:00 +0000 (0:00:01.228) 0:01:23.655 ********* 2025-08-29 15:00:18.885898 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885907 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885914 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885922 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.885930 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
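"Generate systemd ceph target file" and "Enable ceph.target" above install a small umbrella unit so all containerized Ceph services on a node can be started and stopped as a group; both report changed on every node because the testbed is being set up from scratch. A sketch along these lines (unit content assumed):

- name: Generate systemd ceph target file (sketch)
  ansible.builtin.copy:
    dest: /etc/systemd/system/ceph.target
    content: |
      [Unit]
      Description=Ceph target allowing to start/stop all ceph-* services at once

      [Install]
      WantedBy=multi-user.target
    mode: "0644"

- name: Enable ceph.target (sketch)
  ansible.builtin.systemd:
    name: ceph.target
    enabled: true
    state: started
    daemon_reload: true
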
15:00:18.885938 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.885946 | orchestrator | 2025-08-29 15:00:18.885955 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-08-29 15:00:18.885963 | orchestrator | Friday 29 August 2025 14:49:01 +0000 (0:00:00.933) 0:01:24.588 ********* 2025-08-29 15:00:18.885971 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.885979 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.885986 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.885994 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886002 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886010 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886044 | orchestrator | 2025-08-29 15:00:18.886053 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-08-29 15:00:18.886062 | orchestrator | Friday 29 August 2025 14:49:01 +0000 (0:00:00.711) 0:01:25.299 ********* 2025-08-29 15:00:18.886070 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:00:18.886078 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:00:18.886086 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:00:18.886094 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:00:18.886102 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:00:18.886111 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:00:18.886119 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:00:18.886127 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:00:18.886135 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:00:18.886142 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:00:18.886150 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:00:18.886158 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:00:18.886166 | orchestrator | 2025-08-29 15:00:18.886174 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-08-29 15:00:18.886182 | orchestrator | Friday 29 August 2025 14:49:04 +0000 (0:00:02.315) 0:01:27.615 ********* 2025-08-29 15:00:18.886190 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.886197 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.886205 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.886213 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.886221 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.886235 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.886243 | orchestrator | 2025-08-29 15:00:18.886251 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-08-29 15:00:18.886259 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:01.017) 0:01:28.633 ********* 2025-08-29 15:00:18.886267 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 15:00:18.886275 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.886282 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.886290 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886298 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886306 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886313 | orchestrator | 2025-08-29 15:00:18.886327 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-08-29 15:00:18.886335 | orchestrator | Friday 29 August 2025 14:49:06 +0000 (0:00:01.131) 0:01:29.764 ********* 2025-08-29 15:00:18.886343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.886351 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.886358 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.886366 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886374 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886381 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886389 | orchestrator | 2025-08-29 15:00:18.886397 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-08-29 15:00:18.886405 | orchestrator | Friday 29 August 2025 14:49:06 +0000 (0:00:00.645) 0:01:30.410 ********* 2025-08-29 15:00:18.886412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.886420 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.886429 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.886436 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886444 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886451 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886459 | orchestrator | 2025-08-29 15:00:18.886467 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-08-29 15:00:18.886489 | orchestrator | Friday 29 August 2025 14:49:07 +0000 (0:00:00.773) 0:01:31.183 ********* 2025-08-29 15:00:18.886498 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.886506 | orchestrator | 2025-08-29 15:00:18.886514 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-08-29 15:00:18.886522 | orchestrator | Friday 29 August 2025 14:49:09 +0000 (0:00:01.481) 0:01:32.665 ********* 2025-08-29 15:00:18.886530 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.886538 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.886546 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.886554 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.886561 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.886569 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.886577 | orchestrator | 2025-08-29 15:00:18.886585 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-08-29 15:00:18.886593 | orchestrator | Friday 29 August 2025 14:51:05 +0000 (0:01:56.678) 0:03:29.344 ********* 2025-08-29 15:00:18.886601 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:00:18.886609 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:00:18.886617 | orchestrator | 
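"Pulling Ceph container image" dominates the runtime of this role: the profile timer on the following task attributes roughly 1m57s (0:01:56.678) to the pull, and it reports ok on all six nodes. A pull task of this shape, retried against a possibly flaky registry, is the usual pattern; registry, image and tag names below are placeholders, not values from this job:

- name: Pulling Ceph container image (sketch)
  ansible.builtin.command:
    cmd: "{{ container_binary | default('docker') }} pull {{ ceph_docker_registry | default('quay.io') }}/{{ ceph_docker_image | default('ceph/ceph') }}:{{ ceph_docker_image_tag | default('v18.2.1') }}"
  register: docker_image_pull
  changed_when: false
  retries: 3
  delay: 10
  until: docker_image_pull.rc == 0
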
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:00:18.886625 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.886633 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:00:18.886641 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:00:18.886648 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:00:18.886662 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.886670 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:00:18.886678 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:00:18.886686 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:00:18.886694 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.886702 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:00:18.886710 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:00:18.886718 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:00:18.886726 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886734 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:00:18.886742 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:00:18.886750 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:00:18.886758 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886766 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:00:18.886773 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:00:18.886781 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:00:18.886789 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886797 | orchestrator | 2025-08-29 15:00:18.886805 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-08-29 15:00:18.886813 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:01.203) 0:03:30.547 ********* 2025-08-29 15:00:18.886821 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.886829 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.886837 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.886860 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886868 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886876 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886884 | orchestrator | 2025-08-29 15:00:18.886892 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-08-29 15:00:18.886899 | orchestrator | Friday 29 August 2025 14:51:07 +0000 (0:00:00.735) 0:03:31.282 ********* 2025-08-29 15:00:18.886907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.886915 | orchestrator | 2025-08-29 15:00:18.886923 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-08-29 15:00:18.886935 | orchestrator | Friday 29 August 2025 14:51:07 +0000 
(0:00:00.118) 0:03:31.400 ********* 2025-08-29 15:00:18.886943 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.886951 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.886959 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.886967 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.886974 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.886982 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.886990 | orchestrator | 2025-08-29 15:00:18.886998 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-08-29 15:00:18.887006 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:00.848) 0:03:32.249 ********* 2025-08-29 15:00:18.887014 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887022 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887029 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887037 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887045 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887053 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887061 | orchestrator | 2025-08-29 15:00:18.887069 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-08-29 15:00:18.887083 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.652) 0:03:32.901 ********* 2025-08-29 15:00:18.887090 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887098 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887112 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887120 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887128 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887136 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887144 | orchestrator | 2025-08-29 15:00:18.887152 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-08-29 15:00:18.887159 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.948) 0:03:33.849 ********* 2025-08-29 15:00:18.887167 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.887175 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.887183 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.887191 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.887199 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.887206 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.887214 | orchestrator | 2025-08-29 15:00:18.887222 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-08-29 15:00:18.887230 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:03.512) 0:03:37.362 ********* 2025-08-29 15:00:18.887238 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.887245 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.887253 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.887261 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.887269 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.887276 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.887284 | orchestrator | 2025-08-29 15:00:18.887292 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-08-29 15:00:18.887300 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:01.306) 0:03:38.668 ********* 2025-08-29 15:00:18.887308 | 
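"Get ceph version" runs the ceph CLI inside the freshly pulled image and the following set_fact keeps only the version number from the command output ("ceph version X.Y.Z ..."), which later drives the release mapping. Roughly, with the image reference and split index assumed:

- name: Get ceph version (sketch)
  ansible.builtin.command:
    cmd: "{{ container_binary | default('docker') }} run --rm --entrypoint /usr/bin/ceph {{ ceph_docker_image | default('quay.io/ceph/ceph:v18.2.1') }} --version"
  register: ceph_version_out
  changed_when: false

- name: Set_fact ceph_version from the command output (sketch)
  ansible.builtin.set_fact:
    ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"   # e.g. '18.2.1'
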
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.887317 | orchestrator | 2025-08-29 15:00:18.887325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-08-29 15:00:18.887333 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:01.139) 0:03:39.807 ********* 2025-08-29 15:00:18.887341 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887349 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887356 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887364 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887372 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887380 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887388 | orchestrator | 2025-08-29 15:00:18.887396 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-08-29 15:00:18.887403 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.720) 0:03:40.528 ********* 2025-08-29 15:00:18.887411 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887419 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887426 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887434 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887442 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887450 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887457 | orchestrator | 2025-08-29 15:00:18.887465 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-08-29 15:00:18.887473 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:01.024) 0:03:41.553 ********* 2025-08-29 15:00:18.887481 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887489 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887496 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887504 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887512 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887525 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887533 | orchestrator | 2025-08-29 15:00:18.887540 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-08-29 15:00:18.887548 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:00.630) 0:03:42.183 ********* 2025-08-29 15:00:18.887556 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887564 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887572 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887579 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887587 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887595 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887602 | orchestrator | 2025-08-29 15:00:18.887610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-08-29 15:00:18.887618 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:01.222) 0:03:43.406 ********* 2025-08-29 15:00:18.887626 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887634 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887641 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887649 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887657 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887664 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887672 | orchestrator | 2025-08-29 15:00:18.887685 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-08-29 15:00:18.887693 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:01.087) 0:03:44.494 ********* 2025-08-29 15:00:18.887701 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887708 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887716 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887724 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887732 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887739 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887747 | orchestrator | 2025-08-29 15:00:18.887755 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-08-29 15:00:18.887763 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:01.666) 0:03:46.161 ********* 2025-08-29 15:00:18.887771 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887779 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887794 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887810 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887818 | orchestrator | 2025-08-29 15:00:18.887825 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-08-29 15:00:18.887838 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:01.206) 0:03:47.368 ********* 2025-08-29 15:00:18.887884 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.887892 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.887900 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.887908 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.887916 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.887924 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.887931 | orchestrator | 2025-08-29 15:00:18.887939 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-08-29 15:00:18.887947 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:01.174) 0:03:48.543 ********* 2025-08-29 15:00:18.887955 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.887963 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.887971 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.887979 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.887987 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.887995 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.888003 | orchestrator | 2025-08-29 15:00:18.888010 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-08-29 15:00:18.888025 | orchestrator | Friday 29 August 2025 14:51:26 +0000 (0:00:01.469) 0:03:50.012 ********* 2025-08-29 15:00:18.888033 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.888041 | orchestrator | 2025-08-29 
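release.yml is a chain of guarded set_fact tasks that map the numeric Ceph version onto a release name; here only the reef branch matches (major version 18), so everything from jewel through quincy is skipped. A condensed sketch of two links of that chain:

- name: Set_fact ceph_release quincy (sketch)
  ansible.builtin.set_fact:
    ceph_release: quincy
  when: ceph_version.split('.')[0] is version('17', '==')

- name: Set_fact ceph_release reef (sketch)
  ansible.builtin.set_fact:
    ceph_release: reef
  when: ceph_version.split('.')[0] is version('18', '==')
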
15:00:18.888049 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-08-29 15:00:18.888057 | orchestrator | Friday 29 August 2025 14:51:27 +0000 (0:00:01.425) 0:03:51.438 ********* 2025-08-29 15:00:18.888064 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-08-29 15:00:18.888072 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-08-29 15:00:18.888080 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-08-29 15:00:18.888088 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-08-29 15:00:18.888096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-08-29 15:00:18.888104 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-08-29 15:00:18.888112 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-08-29 15:00:18.888120 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-08-29 15:00:18.888127 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-08-29 15:00:18.888135 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-08-29 15:00:18.888143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-08-29 15:00:18.888151 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-08-29 15:00:18.888159 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-08-29 15:00:18.888166 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-08-29 15:00:18.888174 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-08-29 15:00:18.888182 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-08-29 15:00:18.888190 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-08-29 15:00:18.888198 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-08-29 15:00:18.888206 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-08-29 15:00:18.888214 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-08-29 15:00:18.888222 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-08-29 15:00:18.888229 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-08-29 15:00:18.888236 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-08-29 15:00:18.888242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-08-29 15:00:18.888249 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-08-29 15:00:18.888255 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-08-29 15:00:18.888262 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-08-29 15:00:18.888268 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-08-29 15:00:18.888275 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-08-29 15:00:18.888282 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-08-29 15:00:18.888288 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-08-29 15:00:18.888295 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-08-29 15:00:18.888301 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-08-29 15:00:18.888312 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-08-29 15:00:18.888319 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-08-29 15:00:18.888326 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-08-29 15:00:18.888332 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-08-29 15:00:18.888339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-08-29 15:00:18.888353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-08-29 15:00:18.888360 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-08-29 15:00:18.888366 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-08-29 15:00:18.888373 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-08-29 15:00:18.888380 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-08-29 15:00:18.888386 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 15:00:18.888393 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-08-29 15:00:18.888400 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-08-29 15:00:18.888411 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-08-29 15:00:18.888418 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-08-29 15:00:18.888425 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-08-29 15:00:18.888431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 15:00:18.888438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 15:00:18.888444 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 15:00:18.888451 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 15:00:18.888458 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 15:00:18.888464 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 15:00:18.888471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 15:00:18.888477 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 15:00:18.888484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 15:00:18.888491 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 15:00:18.888497 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 15:00:18.888504 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 15:00:18.888510 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 15:00:18.888517 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 15:00:18.888523 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 15:00:18.888530 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 15:00:18.888536 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 15:00:18.888543 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:18.888550 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 15:00:18.888556 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 15:00:18.888563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 15:00:18.888570 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 15:00:18.888576 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 15:00:18.888583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:18.888589 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:18.888596 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 15:00:18.888602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:18.888609 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:18.888616 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:18.888627 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-08-29 15:00:18.888634 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:18.888640 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:18.888647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:18.888654 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:18.888660 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-08-29 15:00:18.888667 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:18.888673 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-08-29 15:00:18.888680 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:18.888686 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-08-29 15:00:18.888693 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-08-29 15:00:18.888700 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-08-29 15:00:18.888711 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-08-29 15:00:18.888718 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-08-29 15:00:18.888725 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-08-29 15:00:18.888732 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-08-29 15:00:18.888738 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-08-29 15:00:18.888745 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-08-29 15:00:18.888751 | orchestrator | 2025-08-29 15:00:18.888758 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-08-29 15:00:18.888765 | orchestrator | Friday 29 August 2025 14:51:34 +0000 (0:00:06.560) 0:03:57.998 ********* 2025-08-29 15:00:18.888771 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.888778 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.888785 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.888791 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.888798 | 
orchestrator | 2025-08-29 15:00:18.888805 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-08-29 15:00:18.888815 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:01.224) 0:03:59.223 ********* 2025-08-29 15:00:18.888821 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.888829 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.888836 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.888857 | orchestrator | 2025-08-29 15:00:18.888864 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-08-29 15:00:18.888870 | orchestrator | Friday 29 August 2025 14:51:36 +0000 (0:00:00.988) 0:04:00.211 ********* 2025-08-29 15:00:18.888877 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.888884 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.888891 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.888897 | orchestrator | 2025-08-29 15:00:18.888904 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-08-29 15:00:18.888910 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:01.360) 0:04:01.572 ********* 2025-08-29 15:00:18.888922 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.888929 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.888935 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.888942 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.888949 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.888956 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.888962 | orchestrator | 2025-08-29 15:00:18.888969 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-08-29 15:00:18.888976 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:01.259) 0:04:02.831 ********* 2025-08-29 15:00:18.888982 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.888989 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.888996 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889002 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.889009 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.889016 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.889023 | orchestrator | 2025-08-29 15:00:18.889030 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-08-29 15:00:18.889037 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:00.766) 0:04:03.598 ********* 2025-08-29 15:00:18.889043 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889050 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889057 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889063 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
15:00:18.889070 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889076 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889083 | orchestrator | 2025-08-29 15:00:18.889090 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-08-29 15:00:18.889096 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:01.075) 0:04:04.674 ********* 2025-08-29 15:00:18.889103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889110 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889117 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889123 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889130 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889137 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889143 | orchestrator | 2025-08-29 15:00:18.889150 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-08-29 15:00:18.889157 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:00.626) 0:04:05.300 ********* 2025-08-29 15:00:18.889164 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889170 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889177 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889183 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889190 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889196 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889204 | orchestrator | 2025-08-29 15:00:18.889210 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-08-29 15:00:18.889217 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:00.942) 0:04:06.243 ********* 2025-08-29 15:00:18.889224 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889231 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889241 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889248 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889255 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889261 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889268 | orchestrator | 2025-08-29 15:00:18.889274 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-08-29 15:00:18.889281 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:00.729) 0:04:06.972 ********* 2025-08-29 15:00:18.889288 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889295 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889306 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889313 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889319 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889326 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889333 | orchestrator | 2025-08-29 15:00:18.889340 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-08-29 15:00:18.889347 | orchestrator | Friday 29 August 2025 14:51:44 +0000 (0:00:01.076) 0:04:08.049 ********* 2025-08-29 15:00:18.889354 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889360 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889372 | orchestrator | skipping: [testbed-node-2] 
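
For reference, the num_osds facts in the tasks above and below are derived from the JSON output of `ceph-volume lvm batch --report`. A minimal sketch of that counting step, assuming the command is invoked with `--format json`, that the new-style report is a plain JSON list of planned OSDs while the legacy report nests them under an "osds" key, and with an illustrative (hypothetical) device list:

```python
import json
import subprocess

# Sketch only: roughly how the "count number of osds" step interprets the
# report from "ceph-volume lvm batch --report". Device paths are illustrative.
devices = ["/dev/sdb", "/dev/sdc"]

report = subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *devices],
    check=True, capture_output=True, text=True,
).stdout

data = json.loads(report)
# New-style report: a list of planned OSDs; legacy report: {"osds": [...], ...}.
num_osds = len(data) if isinstance(data, list) else len(data.get("osds", []))
print(f"num_osds to be created: {num_osds}")
```
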
2025-08-29 15:00:18.889379 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889386 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889392 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889399 | orchestrator | 2025-08-29 15:00:18.889406 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-08-29 15:00:18.889412 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:00.688) 0:04:08.737 ********* 2025-08-29 15:00:18.889419 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889426 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889432 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889440 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.889446 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.889453 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.889459 | orchestrator | 2025-08-29 15:00:18.889466 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-08-29 15:00:18.889473 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:03.899) 0:04:12.637 ********* 2025-08-29 15:00:18.889480 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889487 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889493 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889500 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.889506 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.889513 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.889520 | orchestrator | 2025-08-29 15:00:18.889527 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-08-29 15:00:18.889533 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:00.834) 0:04:13.472 ********* 2025-08-29 15:00:18.889540 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889546 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889553 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889560 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.889566 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.889573 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.889580 | orchestrator | 2025-08-29 15:00:18.889586 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-08-29 15:00:18.889593 | orchestrator | Friday 29 August 2025 14:51:51 +0000 (0:00:01.209) 0:04:14.681 ********* 2025-08-29 15:00:18.889599 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889606 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889613 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889620 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889626 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889633 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889639 | orchestrator | 2025-08-29 15:00:18.889646 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-08-29 15:00:18.889652 | orchestrator | Friday 29 August 2025 14:51:51 +0000 (0:00:00.765) 0:04:15.446 ********* 2025-08-29 15:00:18.889659 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889666 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889679 | 
orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.889694 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.889702 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.889708 | orchestrator | 2025-08-29 15:00:18.889715 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-08-29 15:00:18.889721 | orchestrator | Friday 29 August 2025 14:51:52 +0000 (0:00:01.002) 0:04:16.449 ********* 2025-08-29 15:00:18.889728 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889735 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889743 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-08-29 15:00:18.889752 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889764 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-08-29 15:00:18.889772 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-08-29 15:00:18.889779 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-08-29 15:00:18.889786 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889793 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889804 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-08-29 15:00:18.889811 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-08-29 15:00:18.889818 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889825 | orchestrator | 2025-08-29 15:00:18.889831 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-08-29 15:00:18.889852 | orchestrator | Friday 29 August 2025 14:51:53 +0000 (0:00:00.862) 
0:04:17.312 ********* 2025-08-29 15:00:18.889859 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889872 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889879 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889886 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889892 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889899 | orchestrator | 2025-08-29 15:00:18.889905 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-08-29 15:00:18.889912 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:00.966) 0:04:18.279 ********* 2025-08-29 15:00:18.889919 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889925 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889938 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.889944 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.889951 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.889958 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.889964 | orchestrator | 2025-08-29 15:00:18.889971 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:00:18.889978 | orchestrator | Friday 29 August 2025 14:51:55 +0000 (0:00:00.736) 0:04:19.016 ********* 2025-08-29 15:00:18.889985 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.889991 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.889998 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890005 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.890012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.890049 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.890058 | orchestrator | 2025-08-29 15:00:18.890065 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:00:18.890071 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:01.043) 0:04:20.059 ********* 2025-08-29 15:00:18.890078 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890085 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.890092 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890098 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.890105 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.890111 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.890118 | orchestrator | 2025-08-29 15:00:18.890125 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:00:18.890131 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:00.768) 0:04:20.828 ********* 2025-08-29 15:00:18.890138 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890144 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.890151 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890157 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.890164 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.890171 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.890177 | orchestrator | 2025-08-29 15:00:18.890184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:00:18.890191 | 
orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:01.146) 0:04:21.975 ********* 2025-08-29 15:00:18.890197 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890204 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.890210 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890217 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.890224 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.890230 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.890237 | orchestrator | 2025-08-29 15:00:18.890244 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:00:18.890250 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:01.048) 0:04:23.024 ********* 2025-08-29 15:00:18.890257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 15:00:18.890264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 15:00:18.890274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 15:00:18.890281 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890287 | orchestrator | 2025-08-29 15:00:18.890294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:00:18.890300 | orchestrator | Friday 29 August 2025 14:52:00 +0000 (0:00:00.756) 0:04:23.780 ********* 2025-08-29 15:00:18.890307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 15:00:18.890314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 15:00:18.890320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 15:00:18.890327 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890339 | orchestrator | 2025-08-29 15:00:18.890346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:00:18.890353 | orchestrator | Friday 29 August 2025 14:52:01 +0000 (0:00:00.814) 0:04:24.595 ********* 2025-08-29 15:00:18.890360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 15:00:18.890366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 15:00:18.890373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 15:00:18.890380 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890386 | orchestrator | 2025-08-29 15:00:18.890404 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:00:18.890411 | orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:01.187) 0:04:25.782 ********* 2025-08-29 15:00:18.890419 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890425 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.890432 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890439 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.890445 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.890452 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.890459 | orchestrator | 2025-08-29 15:00:18.890466 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:00:18.890472 | orchestrator | Friday 29 August 2025 14:52:03 +0000 (0:00:01.053) 0:04:26.836 ********* 2025-08-29 15:00:18.890479 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-08-29 15:00:18.890486 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 15:00:18.890493 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-08-29 15:00:18.890500 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.890506 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-08-29 15:00:18.890513 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890520 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:00:18.890526 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:00:18.890533 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:00:18.890539 | orchestrator | 2025-08-29 15:00:18.890546 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-08-29 15:00:18.890553 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:02.912) 0:04:29.748 ********* 2025-08-29 15:00:18.890559 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.890566 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.890573 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.890579 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.890586 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.890592 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.890599 | orchestrator | 2025-08-29 15:00:18.890606 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:18.890613 | orchestrator | Friday 29 August 2025 14:52:09 +0000 (0:00:03.314) 0:04:33.063 ********* 2025-08-29 15:00:18.890619 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.890626 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.890633 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.890639 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.890646 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.890652 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.890659 | orchestrator | 2025-08-29 15:00:18.890666 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 15:00:18.890672 | orchestrator | Friday 29 August 2025 14:52:10 +0000 (0:00:01.422) 0:04:34.486 ********* 2025-08-29 15:00:18.890679 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.890686 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.890693 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.890699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.890706 | orchestrator | 2025-08-29 15:00:18.890717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 15:00:18.890723 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:01.333) 0:04:35.820 ********* 2025-08-29 15:00:18.890730 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.890737 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.890744 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.890750 | orchestrator | 2025-08-29 15:00:18.890757 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 15:00:18.890764 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:00.373) 0:04:36.194 ********* 2025-08-29 15:00:18.890770 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.890777 | orchestrator | changed: [testbed-node-0] 2025-08-29 
15:00:18.890783 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.890790 | orchestrator | 2025-08-29 15:00:18.890797 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 15:00:18.890803 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:01.258) 0:04:37.453 ********* 2025-08-29 15:00:18.890811 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:18.890818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:18.890824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:18.890831 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890837 | orchestrator | 2025-08-29 15:00:18.890859 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 15:00:18.890866 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:00.899) 0:04:38.353 ********* 2025-08-29 15:00:18.890872 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.890882 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.890889 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.890896 | orchestrator | 2025-08-29 15:00:18.890902 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 15:00:18.890909 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.660) 0:04:39.013 ********* 2025-08-29 15:00:18.890916 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.890922 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.890929 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.890935 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.890942 | orchestrator | 2025-08-29 15:00:18.890949 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 15:00:18.890955 | orchestrator | Friday 29 August 2025 14:52:16 +0000 (0:00:00.916) 0:04:39.930 ********* 2025-08-29 15:00:18.890962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.890969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.890976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.890982 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.890989 | orchestrator | 2025-08-29 15:00:18.891000 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 15:00:18.891007 | orchestrator | Friday 29 August 2025 14:52:17 +0000 (0:00:00.701) 0:04:40.631 ********* 2025-08-29 15:00:18.891014 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891020 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.891027 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.891033 | orchestrator | 2025-08-29 15:00:18.891040 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 15:00:18.891047 | orchestrator | Friday 29 August 2025 14:52:17 +0000 (0:00:00.626) 0:04:41.258 ********* 2025-08-29 15:00:18.891053 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891060 | orchestrator | 2025-08-29 15:00:18.891066 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 15:00:18.891073 | 
orchestrator | Friday 29 August 2025 14:52:17 +0000 (0:00:00.260) 0:04:41.518 ********* 2025-08-29 15:00:18.891085 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891092 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.891098 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.891105 | orchestrator | 2025-08-29 15:00:18.891111 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 15:00:18.891118 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.377) 0:04:41.896 ********* 2025-08-29 15:00:18.891124 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891131 | orchestrator | 2025-08-29 15:00:18.891138 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 15:00:18.891144 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.243) 0:04:42.139 ********* 2025-08-29 15:00:18.891151 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891157 | orchestrator | 2025-08-29 15:00:18.891164 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 15:00:18.891170 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.246) 0:04:42.385 ********* 2025-08-29 15:00:18.891177 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891184 | orchestrator | 2025-08-29 15:00:18.891190 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 15:00:18.891197 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.150) 0:04:42.536 ********* 2025-08-29 15:00:18.891203 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891210 | orchestrator | 2025-08-29 15:00:18.891217 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 15:00:18.891223 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:00.221) 0:04:42.757 ********* 2025-08-29 15:00:18.891230 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891236 | orchestrator | 2025-08-29 15:00:18.891243 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 15:00:18.891250 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:00.266) 0:04:43.024 ********* 2025-08-29 15:00:18.891256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.891263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.891270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.891276 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891283 | orchestrator | 2025-08-29 15:00:18.891290 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 15:00:18.891296 | orchestrator | Friday 29 August 2025 14:52:20 +0000 (0:00:00.787) 0:04:43.812 ********* 2025-08-29 15:00:18.891303 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891310 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.891316 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.891323 | orchestrator | 2025-08-29 15:00:18.891329 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 15:00:18.891336 | orchestrator | Friday 29 August 2025 14:52:20 +0000 (0:00:00.633) 0:04:44.445 ********* 2025-08-29 15:00:18.891343 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891349 | orchestrator | 2025-08-29 15:00:18.891356 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 15:00:18.891362 | orchestrator | Friday 29 August 2025 14:52:21 +0000 (0:00:00.280) 0:04:44.726 ********* 2025-08-29 15:00:18.891369 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891376 | orchestrator | 2025-08-29 15:00:18.891383 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 15:00:18.891389 | orchestrator | Friday 29 August 2025 14:52:21 +0000 (0:00:00.304) 0:04:45.030 ********* 2025-08-29 15:00:18.891396 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.891403 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.891409 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.891416 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.891423 | orchestrator | 2025-08-29 15:00:18.891434 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-08-29 15:00:18.891441 | orchestrator | Friday 29 August 2025 14:52:22 +0000 (0:00:01.165) 0:04:46.196 ********* 2025-08-29 15:00:18.891448 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.891454 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.891461 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.891467 | orchestrator | 2025-08-29 15:00:18.891474 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 15:00:18.891481 | orchestrator | Friday 29 August 2025 14:52:22 +0000 (0:00:00.340) 0:04:46.536 ********* 2025-08-29 15:00:18.891487 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.891494 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.891501 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.891507 | orchestrator | 2025-08-29 15:00:18.891514 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 15:00:18.891521 | orchestrator | Friday 29 August 2025 14:52:24 +0000 (0:00:01.297) 0:04:47.834 ********* 2025-08-29 15:00:18.891527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.891534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.891541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.891551 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891558 | orchestrator | 2025-08-29 15:00:18.891565 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 15:00:18.891571 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:00.912) 0:04:48.746 ********* 2025-08-29 15:00:18.891578 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.891585 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.891591 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.891598 | orchestrator | 2025-08-29 15:00:18.891605 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 15:00:18.891612 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:00.415) 0:04:49.161 ********* 2025-08-29 15:00:18.891618 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.891625 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.891632 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.891639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.891645 | orchestrator | 2025-08-29 15:00:18.891652 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 15:00:18.891659 | orchestrator | Friday 29 August 2025 14:52:26 +0000 (0:00:00.977) 0:04:50.138 ********* 2025-08-29 15:00:18.891666 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.891672 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.891679 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.891685 | orchestrator | 2025-08-29 15:00:18.891692 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 15:00:18.891699 | orchestrator | Friday 29 August 2025 14:52:26 +0000 (0:00:00.300) 0:04:50.439 ********* 2025-08-29 15:00:18.891706 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.891712 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.891719 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.891726 | orchestrator | 2025-08-29 15:00:18.891732 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 15:00:18.891739 | orchestrator | Friday 29 August 2025 14:52:28 +0000 (0:00:01.394) 0:04:51.834 ********* 2025-08-29 15:00:18.891745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.891752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.891759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.891766 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891772 | orchestrator | 2025-08-29 15:00:18.891779 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 15:00:18.891793 | orchestrator | Friday 29 August 2025 14:52:28 +0000 (0:00:00.655) 0:04:52.490 ********* 2025-08-29 15:00:18.891800 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.891807 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.891813 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.891820 | orchestrator | 2025-08-29 15:00:18.891827 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-08-29 15:00:18.891833 | orchestrator | Friday 29 August 2025 14:52:29 +0000 (0:00:00.418) 0:04:52.909 ********* 2025-08-29 15:00:18.891853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.891861 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.891867 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.891874 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891880 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.891887 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.891894 | orchestrator | 2025-08-29 15:00:18.891928 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 15:00:18.891935 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:01.052) 0:04:53.962 ********* 2025-08-29 15:00:18.891942 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.891949 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.891955 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.891962 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.891969 | orchestrator | 2025-08-29 15:00:18.891975 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 15:00:18.891982 | orchestrator | Friday 29 August 2025 14:52:31 +0000 (0:00:01.055) 0:04:55.017 ********* 2025-08-29 15:00:18.891989 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.891995 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892002 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892008 | orchestrator | 2025-08-29 15:00:18.892015 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 15:00:18.892022 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:00.718) 0:04:55.736 ********* 2025-08-29 15:00:18.892028 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.892035 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.892041 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.892048 | orchestrator | 2025-08-29 15:00:18.892058 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 15:00:18.892064 | orchestrator | Friday 29 August 2025 14:52:33 +0000 (0:00:01.757) 0:04:57.493 ********* 2025-08-29 15:00:18.892071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:18.892078 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:18.892084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:18.892091 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892097 | orchestrator | 2025-08-29 15:00:18.892104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 15:00:18.892111 | orchestrator | Friday 29 August 2025 14:52:34 +0000 (0:00:00.797) 0:04:58.291 ********* 2025-08-29 15:00:18.892117 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892124 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892131 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892137 | orchestrator | 2025-08-29 15:00:18.892144 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-08-29 15:00:18.892151 | orchestrator | 2025-08-29 15:00:18.892157 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.892169 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:00:00.767) 0:04:59.058 ********* 2025-08-29 15:00:18.892176 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.892183 | orchestrator | 2025-08-29 15:00:18.892190 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:18.892202 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.855) 0:04:59.914 ********* 2025-08-29 15:00:18.892209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.892216 | orchestrator | 2025-08-29 15:00:18.892222 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 
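
The check_running_containers step included above probes the container runtime for already-running Ceph daemons; its first probe, the mon-container check, continues directly below. A minimal sketch of such a probe, assuming a podman or docker runtime and the default ceph-ansible container name prefix (`ceph-mon`) — both assumptions here, not taken from the job output:

```python
import shutil
import subprocess

# Sketch only: ask the container runtime whether a ceph-mon container is up.
runtime = shutil.which("podman") or shutil.which("docker") or "docker"

result = subprocess.run(
    [runtime, "ps", "-q", "--filter", "name=ceph-mon"],
    capture_output=True, text=True,
)
mon_running = bool(result.stdout.strip())
print("mon container running:", mon_running)
```
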
2025-08-29 15:00:18.892229 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.512) 0:05:00.427 ********* 2025-08-29 15:00:18.892236 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892242 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892249 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892256 | orchestrator | 2025-08-29 15:00:18.892262 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.892269 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:01.002) 0:05:01.429 ********* 2025-08-29 15:00:18.892276 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892282 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892295 | orchestrator | 2025-08-29 15:00:18.892302 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:18.892309 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.479) 0:05:01.909 ********* 2025-08-29 15:00:18.892315 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892322 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892329 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892335 | orchestrator | 2025-08-29 15:00:18.892342 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.892348 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.385) 0:05:02.295 ********* 2025-08-29 15:00:18.892355 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892362 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892368 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892375 | orchestrator | 2025-08-29 15:00:18.892382 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.892388 | orchestrator | Friday 29 August 2025 14:52:39 +0000 (0:00:00.355) 0:05:02.650 ********* 2025-08-29 15:00:18.892395 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892402 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892408 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892415 | orchestrator | 2025-08-29 15:00:18.892421 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:18.892428 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:01.078) 0:05:03.729 ********* 2025-08-29 15:00:18.892435 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892441 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892454 | orchestrator | 2025-08-29 15:00:18.892461 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.892468 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:00.443) 0:05:04.172 ********* 2025-08-29 15:00:18.892474 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892481 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892488 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892494 | orchestrator | 2025-08-29 15:00:18.892501 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:18.892507 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:00.381) 
0:05:04.554 ********* 2025-08-29 15:00:18.892514 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892521 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892527 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892534 | orchestrator | 2025-08-29 15:00:18.892541 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.892547 | orchestrator | Friday 29 August 2025 14:52:41 +0000 (0:00:00.777) 0:05:05.331 ********* 2025-08-29 15:00:18.892559 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892566 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892573 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892580 | orchestrator | 2025-08-29 15:00:18.892586 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.892593 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:01.012) 0:05:06.344 ********* 2025-08-29 15:00:18.892600 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892606 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892613 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892619 | orchestrator | 2025-08-29 15:00:18.892626 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.892636 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:00.342) 0:05:06.687 ********* 2025-08-29 15:00:18.892643 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892650 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892656 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892663 | orchestrator | 2025-08-29 15:00:18.892670 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.892676 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:00.561) 0:05:07.248 ********* 2025-08-29 15:00:18.892683 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892689 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892696 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892703 | orchestrator | 2025-08-29 15:00:18.892709 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.892716 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:00.443) 0:05:07.692 ********* 2025-08-29 15:00:18.892723 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892729 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892736 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892742 | orchestrator | 2025-08-29 15:00:18.892749 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.892760 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:00.792) 0:05:08.485 ********* 2025-08-29 15:00:18.892767 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892773 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892780 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892787 | orchestrator | 2025-08-29 15:00:18.892793 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.892800 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:00.318) 0:05:08.803 ********* 2025-08-29 15:00:18.892807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:00:18.892813 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892820 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892826 | orchestrator | 2025-08-29 15:00:18.892833 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.892872 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:00.412) 0:05:09.216 ********* 2025-08-29 15:00:18.892881 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.892887 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.892894 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.892900 | orchestrator | 2025-08-29 15:00:18.892907 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.892914 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:00.333) 0:05:09.549 ********* 2025-08-29 15:00:18.892920 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892927 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892934 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892940 | orchestrator | 2025-08-29 15:00:18.892947 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.892953 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:00.423) 0:05:09.973 ********* 2025-08-29 15:00:18.892960 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.892972 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.892978 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.892985 | orchestrator | 2025-08-29 15:00:18.892992 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.892998 | orchestrator | Friday 29 August 2025 14:52:47 +0000 (0:00:00.676) 0:05:10.649 ********* 2025-08-29 15:00:18.893005 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893011 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893018 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893025 | orchestrator | 2025-08-29 15:00:18.893031 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:00:18.893038 | orchestrator | Friday 29 August 2025 14:52:47 +0000 (0:00:00.574) 0:05:11.223 ********* 2025-08-29 15:00:18.893044 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893051 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893057 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893064 | orchestrator | 2025-08-29 15:00:18.893070 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-08-29 15:00:18.893077 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:00.361) 0:05:11.585 ********* 2025-08-29 15:00:18.893084 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.893090 | orchestrator | 2025-08-29 15:00:18.893097 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-08-29 15:00:18.893104 | orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:01.076) 0:05:12.661 ********* 2025-08-29 15:00:18.893110 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.893117 | orchestrator | 2025-08-29 15:00:18.893123 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-08-29 15:00:18.893130 | 
orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:00.198) 0:05:12.860 ********* 2025-08-29 15:00:18.893137 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-08-29 15:00:18.893143 | orchestrator | 2025-08-29 15:00:18.893150 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-08-29 15:00:18.893156 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:01.304) 0:05:14.165 ********* 2025-08-29 15:00:18.893163 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893169 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893176 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893183 | orchestrator | 2025-08-29 15:00:18.893189 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-08-29 15:00:18.893196 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.472) 0:05:14.637 ********* 2025-08-29 15:00:18.893203 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893209 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893216 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893222 | orchestrator | 2025-08-29 15:00:18.893229 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-08-29 15:00:18.893236 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.792) 0:05:15.430 ********* 2025-08-29 15:00:18.893242 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893249 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893256 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893262 | orchestrator | 2025-08-29 15:00:18.893272 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-08-29 15:00:18.893279 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:01.169) 0:05:16.599 ********* 2025-08-29 15:00:18.893286 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893292 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893299 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893306 | orchestrator | 2025-08-29 15:00:18.893312 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-08-29 15:00:18.893319 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:00.780) 0:05:17.380 ********* 2025-08-29 15:00:18.893325 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893337 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893344 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893351 | orchestrator | 2025-08-29 15:00:18.893357 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-08-29 15:00:18.893364 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:00.731) 0:05:18.111 ********* 2025-08-29 15:00:18.893370 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893377 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893384 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893390 | orchestrator | 2025-08-29 15:00:18.893401 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-08-29 15:00:18.893408 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:01.029) 0:05:19.141 ********* 2025-08-29 15:00:18.893414 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893420 | orchestrator | 2025-08-29 15:00:18.893426 | orchestrator | TASK 
[ceph-mon : Slurp admin keyring] ****************************************** 2025-08-29 15:00:18.893432 | orchestrator | Friday 29 August 2025 14:52:56 +0000 (0:00:01.340) 0:05:20.481 ********* 2025-08-29 15:00:18.893438 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893445 | orchestrator | 2025-08-29 15:00:18.893451 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-08-29 15:00:18.893457 | orchestrator | Friday 29 August 2025 14:52:57 +0000 (0:00:00.749) 0:05:21.231 ********* 2025-08-29 15:00:18.893463 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:18.893469 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.893475 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.893481 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:00:18.893487 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-08-29 15:00:18.893494 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:00:18.893500 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:00:18.893506 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-08-29 15:00:18.893512 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:00:18.893518 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-08-29 15:00:18.893524 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-08-29 15:00:18.893530 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-08-29 15:00:18.893536 | orchestrator | 2025-08-29 15:00:18.893542 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-08-29 15:00:18.893548 | orchestrator | Friday 29 August 2025 14:53:00 +0000 (0:00:03.307) 0:05:24.539 ********* 2025-08-29 15:00:18.893555 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893561 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893567 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893573 | orchestrator | 2025-08-29 15:00:18.893579 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-08-29 15:00:18.893586 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:01.274) 0:05:25.814 ********* 2025-08-29 15:00:18.893592 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893598 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893604 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893610 | orchestrator | 2025-08-29 15:00:18.893616 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-08-29 15:00:18.893623 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:00.668) 0:05:26.483 ********* 2025-08-29 15:00:18.893629 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.893635 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.893641 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.893647 | orchestrator | 2025-08-29 15:00:18.893653 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-08-29 15:00:18.893660 | orchestrator | Friday 29 August 2025 14:53:03 +0000 (0:00:00.390) 0:05:26.873 ********* 2025-08-29 15:00:18.893670 | orchestrator | changed: [testbed-node-0] 2025-08-29 
15:00:18.893676 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893683 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893689 | orchestrator | 2025-08-29 15:00:18.893695 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-08-29 15:00:18.893701 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:01.935) 0:05:28.809 ********* 2025-08-29 15:00:18.893707 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893714 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893720 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893726 | orchestrator | 2025-08-29 15:00:18.893732 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-08-29 15:00:18.893738 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:01.549) 0:05:30.359 ********* 2025-08-29 15:00:18.893744 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.893751 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.893757 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.893763 | orchestrator | 2025-08-29 15:00:18.893769 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-08-29 15:00:18.893775 | orchestrator | Friday 29 August 2025 14:53:07 +0000 (0:00:00.364) 0:05:30.724 ********* 2025-08-29 15:00:18.893781 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.893788 | orchestrator | 2025-08-29 15:00:18.893797 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-08-29 15:00:18.893803 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:00.882) 0:05:31.606 ********* 2025-08-29 15:00:18.893810 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.893816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.893822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.893828 | orchestrator | 2025-08-29 15:00:18.893834 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-08-29 15:00:18.893854 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:00.327) 0:05:31.934 ********* 2025-08-29 15:00:18.893860 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.893866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.893872 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.893878 | orchestrator | 2025-08-29 15:00:18.893885 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:18.893891 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:00.330) 0:05:32.264 ********* 2025-08-29 15:00:18.893897 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.893903 | orchestrator | 2025-08-29 15:00:18.893910 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-08-29 15:00:18.893920 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.915) 0:05:33.180 ********* 2025-08-29 15:00:18.893926 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893932 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893938 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893945 | orchestrator | 2025-08-29 
15:00:18.893951 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-08-29 15:00:18.893957 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:01.815) 0:05:34.996 ********* 2025-08-29 15:00:18.893963 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.893970 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.893976 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.893982 | orchestrator | 2025-08-29 15:00:18.893988 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-08-29 15:00:18.893994 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:01.512) 0:05:36.509 ********* 2025-08-29 15:00:18.894000 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.894007 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.894099 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.894109 | orchestrator | 2025-08-29 15:00:18.894115 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-08-29 15:00:18.894121 | orchestrator | Friday 29 August 2025 14:53:15 +0000 (0:00:02.270) 0:05:38.779 ********* 2025-08-29 15:00:18.894127 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.894134 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.894140 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.894146 | orchestrator | 2025-08-29 15:00:18.894152 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-08-29 15:00:18.894158 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:02.130) 0:05:40.909 ********* 2025-08-29 15:00:18.894164 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.894170 | orchestrator | 2025-08-29 15:00:18.894177 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-08-29 15:00:18.894183 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:00.755) 0:05:41.665 ********* 2025-08-29 15:00:18.894189 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
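(The quorum wait above polls the first monitor and retries until every mon has joined; it succeeds on the next attempt, as the following line shows. A minimal sketch of the kind of check this corresponds to, assuming the containerized ceph CLI is reached through docker and that the usual ceph-ansible container name ceph-mon-<hostname> applies; both are assumptions, not taken from this log:

    # Poll quorum status on the first monitor; repeat (here: up to 10 retries)
    # until all three mons, testbed-node-0..2, are listed in quorum_names.
    docker exec ceph-mon-testbed-node-0 ceph quorum_status --format json
    # A wrapper around this call would parse the JSON and compare
    # len(quorum_names) against the number of monitor hosts, i.e. 3.
)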
2025-08-29 15:00:18.894195 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.894202 | orchestrator | 2025-08-29 15:00:18.894208 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-08-29 15:00:18.894214 | orchestrator | Friday 29 August 2025 14:53:40 +0000 (0:00:22.087) 0:06:03.753 ********* 2025-08-29 15:00:18.894220 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.894227 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.894233 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.894239 | orchestrator | 2025-08-29 15:00:18.894245 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-08-29 15:00:18.894251 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:10.351) 0:06:14.104 ********* 2025-08-29 15:00:18.894257 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894264 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.894270 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.894276 | orchestrator | 2025-08-29 15:00:18.894282 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-08-29 15:00:18.894288 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:00.366) 0:06:14.470 ********* 2025-08-29 15:00:18.894296 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-08-29 15:00:18.894304 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-08-29 15:00:18.894315 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-08-29 15:00:18.894323 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-08-29 15:00:18.894358 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-08-29 15:00:18.894367 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d4f8a4c662d1216bddd1a944b41ce1cc94152d7c'}])  2025-08-29 15:00:18.894374 | orchestrator | 2025-08-29 15:00:18.894381 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:18.894387 | orchestrator | Friday 29 August 2025 14:54:06 +0000 (0:00:15.880) 0:06:30.351 ********* 2025-08-29 15:00:18.894394 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894400 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.894406 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.894412 | orchestrator | 2025-08-29 15:00:18.894418 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 15:00:18.894424 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.374) 0:06:30.725 ********* 2025-08-29 15:00:18.894431 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.894437 | orchestrator | 2025-08-29 15:00:18.894443 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 15:00:18.894449 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.534) 0:06:31.260 ********* 2025-08-29 15:00:18.894455 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.894462 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.894468 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.894474 | orchestrator | 2025-08-29 15:00:18.894480 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 15:00:18.894486 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.575) 0:06:31.835 ********* 2025-08-29 15:00:18.894492 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894499 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.894505 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.894511 | orchestrator | 2025-08-29 15:00:18.894517 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 15:00:18.894523 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.336) 0:06:32.172 ********* 2025-08-29 15:00:18.894529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:18.894536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:18.894542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:18.894548 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894554 | orchestrator | 2025-08-29 15:00:18.894560 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 15:00:18.894566 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:00.715) 0:06:32.888 ********* 2025-08-29 15:00:18.894572 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.894578 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.894585 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.894591 | orchestrator | 2025-08-29 15:00:18.894597 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 
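(For reference, the "Set cluster configs" task that closed the ceph-mon play above writes each key/value pair into the cluster's central configuration database. A rough shell equivalent, using the values shown in the task output; the docker exec wrapper and the container name are assumptions about this containerized deployment:

    # Apply the global options recorded in the task output above.
    docker exec ceph-mon-testbed-node-0 ceph config set global public_network 192.168.16.0/20
    docker exec ceph-mon-testbed-node-0 ceph config set global cluster_network 192.168.16.0/20
    docker exec ceph-mon-testbed-node-0 ceph config set global ms_bind_ipv6 false
    docker exec ceph-mon-testbed-node-0 ceph config set global ms_bind_ipv4 true
    # osd_pool_default_crush_rule (-1) is written the same way; the
    # osd_crush_chooseleaf_type item was skipped because it resolved to an
    # omitted placeholder.
)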
2025-08-29 15:00:18.894603 | orchestrator | 2025-08-29 15:00:18.894609 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.894616 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:00.564) 0:06:33.453 ********* 2025-08-29 15:00:18.894627 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.894633 | orchestrator | 2025-08-29 15:00:18.894639 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:18.894647 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.805) 0:06:34.258 ********* 2025-08-29 15:00:18.894657 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.894667 | orchestrator | 2025-08-29 15:00:18.894677 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:18.894687 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:00.545) 0:06:34.803 ********* 2025-08-29 15:00:18.894695 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.894704 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.894713 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.894721 | orchestrator | 2025-08-29 15:00:18.894738 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.894748 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:01.058) 0:06:35.862 ********* 2025-08-29 15:00:18.894757 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894767 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.894777 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.894787 | orchestrator | 2025-08-29 15:00:18.894797 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:18.894808 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.331) 0:06:36.194 ********* 2025-08-29 15:00:18.894818 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894828 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.894837 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.894880 | orchestrator | 2025-08-29 15:00:18.894886 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.894893 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.350) 0:06:36.544 ********* 2025-08-29 15:00:18.894899 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.894905 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.894911 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.894917 | orchestrator | 2025-08-29 15:00:18.894951 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.894959 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.383) 0:06:36.928 ********* 2025-08-29 15:00:18.894965 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.894971 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.894977 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.894983 | orchestrator | 2025-08-29 15:00:18.894989 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 
15:00:18.894996 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:01.087) 0:06:38.015 ********* 2025-08-29 15:00:18.895002 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895008 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895014 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895020 | orchestrator | 2025-08-29 15:00:18.895026 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.895033 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.387) 0:06:38.403 ********* 2025-08-29 15:00:18.895039 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895045 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895051 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895057 | orchestrator | 2025-08-29 15:00:18.895063 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:18.895070 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.396) 0:06:38.799 ********* 2025-08-29 15:00:18.895076 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895083 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895095 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895102 | orchestrator | 2025-08-29 15:00:18.895108 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.895114 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.810) 0:06:39.610 ********* 2025-08-29 15:00:18.895120 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895126 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895132 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895138 | orchestrator | 2025-08-29 15:00:18.895144 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.895151 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:01.078) 0:06:40.688 ********* 2025-08-29 15:00:18.895157 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895163 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895169 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895175 | orchestrator | 2025-08-29 15:00:18.895181 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.895187 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:00.351) 0:06:41.040 ********* 2025-08-29 15:00:18.895193 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895199 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895205 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895211 | orchestrator | 2025-08-29 15:00:18.895217 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.895224 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:00.428) 0:06:41.468 ********* 2025-08-29 15:00:18.895230 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895236 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895242 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895248 | orchestrator | 2025-08-29 15:00:18.895254 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.895260 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:00.479) 0:06:41.947 ********* 
2025-08-29 15:00:18.895266 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895273 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895279 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895285 | orchestrator | 2025-08-29 15:00:18.895291 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.895297 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.646) 0:06:42.593 ********* 2025-08-29 15:00:18.895303 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895309 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895316 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895322 | orchestrator | 2025-08-29 15:00:18.895328 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.895334 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.420) 0:06:43.014 ********* 2025-08-29 15:00:18.895340 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895346 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895352 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895359 | orchestrator | 2025-08-29 15:00:18.895365 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.895371 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.345) 0:06:43.359 ********* 2025-08-29 15:00:18.895377 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895383 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895389 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895394 | orchestrator | 2025-08-29 15:00:18.895403 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.895409 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:00.349) 0:06:43.708 ********* 2025-08-29 15:00:18.895414 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895420 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895425 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895430 | orchestrator | 2025-08-29 15:00:18.895450 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.895456 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:00.708) 0:06:44.417 ********* 2025-08-29 15:00:18.895461 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895467 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895472 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895477 | orchestrator | 2025-08-29 15:00:18.895483 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.895488 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:00.376) 0:06:44.793 ********* 2025-08-29 15:00:18.895494 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895499 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895504 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895510 | orchestrator | 2025-08-29 15:00:18.895515 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:00:18.895539 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:00.631) 0:06:45.425 ********* 2025-08-29 15:00:18.895545 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 
15:00:18.895551 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:18.895556 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:18.895561 | orchestrator | 2025-08-29 15:00:18.895567 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-08-29 15:00:18.895572 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:01.189) 0:06:46.614 ********* 2025-08-29 15:00:18.895577 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.895583 | orchestrator | 2025-08-29 15:00:18.895588 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-08-29 15:00:18.895594 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:00.826) 0:06:47.441 ********* 2025-08-29 15:00:18.895599 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.895604 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.895610 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.895615 | orchestrator | 2025-08-29 15:00:18.895621 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-08-29 15:00:18.895626 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.730) 0:06:48.171 ********* 2025-08-29 15:00:18.895632 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895637 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895642 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895647 | orchestrator | 2025-08-29 15:00:18.895653 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-08-29 15:00:18.895658 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.425) 0:06:48.596 ********* 2025-08-29 15:00:18.895664 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:18.895669 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:18.895674 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:18.895680 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-08-29 15:00:18.895685 | orchestrator | 2025-08-29 15:00:18.895690 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-08-29 15:00:18.895696 | orchestrator | Friday 29 August 2025 14:54:36 +0000 (0:00:11.511) 0:07:00.108 ********* 2025-08-29 15:00:18.895701 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895707 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895712 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895717 | orchestrator | 2025-08-29 15:00:18.895723 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-08-29 15:00:18.895728 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:00.700) 0:07:00.808 ********* 2025-08-29 15:00:18.895733 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:00:18.895739 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:00:18.895749 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:00:18.895755 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 15:00:18.895760 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 
15:00:18.895765 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.895771 | orchestrator | 2025-08-29 15:00:18.895776 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:18.895781 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:02.062) 0:07:02.871 ********* 2025-08-29 15:00:18.895787 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:00:18.895792 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:00:18.895798 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:00:18.895803 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 15:00:18.895808 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 15:00:18.895814 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:18.895819 | orchestrator | 2025-08-29 15:00:18.895824 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-08-29 15:00:18.895830 | orchestrator | Friday 29 August 2025 14:54:40 +0000 (0:00:01.372) 0:07:04.243 ********* 2025-08-29 15:00:18.895835 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.895854 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.895859 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.895865 | orchestrator | 2025-08-29 15:00:18.895870 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-08-29 15:00:18.895875 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:00.771) 0:07:05.015 ********* 2025-08-29 15:00:18.895884 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895890 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895895 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895900 | orchestrator | 2025-08-29 15:00:18.895906 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-08-29 15:00:18.895911 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:00.795) 0:07:05.811 ********* 2025-08-29 15:00:18.895916 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895922 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.895932 | orchestrator | 2025-08-29 15:00:18.895938 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-08-29 15:00:18.895943 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:00.367) 0:07:06.179 ********* 2025-08-29 15:00:18.895948 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.895954 | orchestrator | 2025-08-29 15:00:18.895959 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-08-29 15:00:18.895964 | orchestrator | Friday 29 August 2025 14:54:43 +0000 (0:00:00.579) 0:07:06.758 ********* 2025-08-29 15:00:18.895970 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.895993 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.895999 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.896005 | orchestrator | 2025-08-29 15:00:18.896010 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-08-29 15:00:18.896015 | orchestrator | Friday 29 August 2025 
14:54:43 +0000 (0:00:00.732) 0:07:07.491 ********* 2025-08-29 15:00:18.896021 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.896026 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.896032 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.896037 | orchestrator | 2025-08-29 15:00:18.896042 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:18.896048 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:00.403) 0:07:07.895 ********* 2025-08-29 15:00:18.896053 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.896063 | orchestrator | 2025-08-29 15:00:18.896068 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-08-29 15:00:18.896074 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:00.555) 0:07:08.450 ********* 2025-08-29 15:00:18.896079 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.896085 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.896090 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.896095 | orchestrator | 2025-08-29 15:00:18.896101 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-08-29 15:00:18.896106 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:01.681) 0:07:10.131 ********* 2025-08-29 15:00:18.896111 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.896117 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.896122 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.896127 | orchestrator | 2025-08-29 15:00:18.896133 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-08-29 15:00:18.896138 | orchestrator | Friday 29 August 2025 14:54:47 +0000 (0:00:01.331) 0:07:11.463 ********* 2025-08-29 15:00:18.896143 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.896149 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.896154 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.896159 | orchestrator | 2025-08-29 15:00:18.896165 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-08-29 15:00:18.896170 | orchestrator | Friday 29 August 2025 14:54:49 +0000 (0:00:01.828) 0:07:13.291 ********* 2025-08-29 15:00:18.896175 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.896181 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.896187 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.896192 | orchestrator | 2025-08-29 15:00:18.896197 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-08-29 15:00:18.896203 | orchestrator | Friday 29 August 2025 14:54:51 +0000 (0:00:02.018) 0:07:15.309 ********* 2025-08-29 15:00:18.896208 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.896213 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.896219 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-08-29 15:00:18.896224 | orchestrator | 2025-08-29 15:00:18.896230 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-08-29 15:00:18.896235 | orchestrator | Friday 29 August 2025 14:54:52 +0000 (0:00:00.806) 0:07:16.116 ********* 2025-08-29 15:00:18.896240 | 
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-08-29 15:00:18.896246 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-08-29 15:00:18.896251 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-08-29 15:00:18.896256 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-08-29 15:00:18.896262 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-08-29 15:00:18.896267 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.896273 | orchestrator | 2025-08-29 15:00:18.896278 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-08-29 15:00:18.896284 | orchestrator | Friday 29 August 2025 14:55:22 +0000 (0:00:30.047) 0:07:46.163 ********* 2025-08-29 15:00:18.896289 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.896294 | orchestrator | 2025-08-29 15:00:18.896300 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-08-29 15:00:18.896305 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:01.190) 0:07:47.354 ********* 2025-08-29 15:00:18.896313 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.896323 | orchestrator | 2025-08-29 15:00:18.896328 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-08-29 15:00:18.896334 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:00.331) 0:07:47.685 ********* 2025-08-29 15:00:18.896339 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.896344 | orchestrator | 2025-08-29 15:00:18.896350 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-08-29 15:00:18.896355 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:00.194) 0:07:47.880 ********* 2025-08-29 15:00:18.896360 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-08-29 15:00:18.896365 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-08-29 15:00:18.896371 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-08-29 15:00:18.896376 | orchestrator | 2025-08-29 15:00:18.896381 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-08-29 15:00:18.896387 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:06.324) 0:07:54.205 ********* 2025-08-29 15:00:18.896392 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-08-29 15:00:18.896415 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-08-29 15:00:18.896421 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-08-29 15:00:18.896427 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-08-29 15:00:18.896432 | orchestrator | 2025-08-29 15:00:18.896438 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:18.896443 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:05.164) 0:07:59.370 ********* 2025-08-29 
15:00:18.896448 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.896454 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.896459 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.896464 | orchestrator | 2025-08-29 15:00:18.896470 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 15:00:18.896475 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.747) 0:08:00.117 ********* 2025-08-29 15:00:18.896481 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:18.896486 | orchestrator | 2025-08-29 15:00:18.896491 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 15:00:18.896497 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.633) 0:08:00.751 ********* 2025-08-29 15:00:18.896502 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.896508 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.896513 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.896518 | orchestrator | 2025-08-29 15:00:18.896524 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 15:00:18.896529 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.808) 0:08:01.560 ********* 2025-08-29 15:00:18.896534 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.896540 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.896545 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.896551 | orchestrator | 2025-08-29 15:00:18.896556 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 15:00:18.896561 | orchestrator | Friday 29 August 2025 14:55:39 +0000 (0:00:01.219) 0:08:02.780 ********* 2025-08-29 15:00:18.896567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:18.896572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:18.896577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:18.896583 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.896588 | orchestrator | 2025-08-29 15:00:18.896594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 15:00:18.896599 | orchestrator | Friday 29 August 2025 14:55:39 +0000 (0:00:00.687) 0:08:03.467 ********* 2025-08-29 15:00:18.896608 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.896614 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.896619 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.896625 | orchestrator | 2025-08-29 15:00:18.896630 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-08-29 15:00:18.896635 | orchestrator | 2025-08-29 15:00:18.896641 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.896646 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:00.948) 0:08:04.416 ********* 2025-08-29 15:00:18.896651 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.896657 | orchestrator | 2025-08-29 15:00:18.896663 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 
15:00:18.896668 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:00.581) 0:08:04.998 ********* 2025-08-29 15:00:18.896673 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.896679 | orchestrator | 2025-08-29 15:00:18.896684 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:18.896690 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.816) 0:08:05.814 ********* 2025-08-29 15:00:18.896695 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.896700 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.896706 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.896711 | orchestrator | 2025-08-29 15:00:18.896716 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.896722 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.378) 0:08:06.193 ********* 2025-08-29 15:00:18.896727 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.896733 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.896740 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.896749 | orchestrator | 2025-08-29 15:00:18.896763 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:18.896773 | orchestrator | Friday 29 August 2025 14:55:43 +0000 (0:00:00.811) 0:08:07.004 ********* 2025-08-29 15:00:18.896782 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.896792 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.896801 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.896811 | orchestrator | 2025-08-29 15:00:18.896821 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.896827 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:00.768) 0:08:07.772 ********* 2025-08-29 15:00:18.896832 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.896837 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.896856 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.896861 | orchestrator | 2025-08-29 15:00:18.896867 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.896872 | orchestrator | Friday 29 August 2025 14:55:45 +0000 (0:00:01.174) 0:08:08.947 ********* 2025-08-29 15:00:18.896878 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.896883 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.896889 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.896894 | orchestrator | 2025-08-29 15:00:18.896900 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:18.896925 | orchestrator | Friday 29 August 2025 14:55:45 +0000 (0:00:00.403) 0:08:09.350 ********* 2025-08-29 15:00:18.896932 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.896937 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.896942 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.896948 | orchestrator | 2025-08-29 15:00:18.896953 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.896958 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:00.367) 0:08:09.718 ********* 2025-08-29 15:00:18.896969 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:00:18.896975 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.896980 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.896986 | orchestrator | 2025-08-29 15:00:18.896991 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:18.896996 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:00.326) 0:08:10.045 ********* 2025-08-29 15:00:18.897002 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897007 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897012 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897018 | orchestrator | 2025-08-29 15:00:18.897023 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.897028 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:01.126) 0:08:11.171 ********* 2025-08-29 15:00:18.897034 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897039 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897045 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897050 | orchestrator | 2025-08-29 15:00:18.897056 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.897061 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:00.816) 0:08:11.988 ********* 2025-08-29 15:00:18.897066 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897072 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897077 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897082 | orchestrator | 2025-08-29 15:00:18.897088 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.897093 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:00.400) 0:08:12.389 ********* 2025-08-29 15:00:18.897098 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897104 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897109 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897114 | orchestrator | 2025-08-29 15:00:18.897120 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.897125 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.377) 0:08:12.766 ********* 2025-08-29 15:00:18.897131 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897136 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897141 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897147 | orchestrator | 2025-08-29 15:00:18.897152 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.897158 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.685) 0:08:13.452 ********* 2025-08-29 15:00:18.897163 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897168 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897174 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897179 | orchestrator | 2025-08-29 15:00:18.897185 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.897190 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.376) 0:08:13.828 ********* 2025-08-29 15:00:18.897195 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897201 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897206 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 15:00:18.897211 | orchestrator | 2025-08-29 15:00:18.897217 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.897222 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.428) 0:08:14.256 ********* 2025-08-29 15:00:18.897227 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897233 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897238 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897243 | orchestrator | 2025-08-29 15:00:18.897249 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.897254 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.330) 0:08:14.586 ********* 2025-08-29 15:00:18.897259 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897265 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897274 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897279 | orchestrator | 2025-08-29 15:00:18.897285 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.897290 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.314) 0:08:14.901 ********* 2025-08-29 15:00:18.897296 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897301 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897306 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897312 | orchestrator | 2025-08-29 15:00:18.897317 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.897326 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.649) 0:08:15.550 ********* 2025-08-29 15:00:18.897331 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897337 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897342 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897348 | orchestrator | 2025-08-29 15:00:18.897353 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.897358 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:00.406) 0:08:15.957 ********* 2025-08-29 15:00:18.897364 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897369 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897374 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897380 | orchestrator | 2025-08-29 15:00:18.897385 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-08-29 15:00:18.897390 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:00.647) 0:08:16.605 ********* 2025-08-29 15:00:18.897396 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897401 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897406 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897411 | orchestrator | 2025-08-29 15:00:18.897417 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:00:18.897422 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:00.828) 0:08:17.433 ********* 2025-08-29 15:00:18.897431 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:00:18.897436 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:18.897442 | orchestrator | ok: [testbed-node-3 
-> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:18.897447 | orchestrator | 2025-08-29 15:00:18.897453 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-08-29 15:00:18.897458 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:00.795) 0:08:18.229 ********* 2025-08-29 15:00:18.897463 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.897469 | orchestrator | 2025-08-29 15:00:18.897474 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-08-29 15:00:18.897479 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:00.765) 0:08:18.994 ********* 2025-08-29 15:00:18.897485 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897490 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897495 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897501 | orchestrator | 2025-08-29 15:00:18.897506 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-08-29 15:00:18.897511 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:00.771) 0:08:19.766 ********* 2025-08-29 15:00:18.897517 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897522 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897528 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897533 | orchestrator | 2025-08-29 15:00:18.897538 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-08-29 15:00:18.897544 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:00.418) 0:08:20.184 ********* 2025-08-29 15:00:18.897549 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897555 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897564 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897569 | orchestrator | 2025-08-29 15:00:18.897575 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 15:00:18.897580 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.747) 0:08:20.932 ********* 2025-08-29 15:00:18.897585 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.897591 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.897596 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.897601 | orchestrator | 2025-08-29 15:00:18.897607 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 15:00:18.897612 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.377) 0:08:21.310 ********* 2025-08-29 15:00:18.897618 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:00:18.897623 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:00:18.897628 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:00:18.897634 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:00:18.897639 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:00:18.897644 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 
15:00:18.897650 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:00:18.897655 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:00:18.897660 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:00:18.897666 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:00:18.897671 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:00:18.897676 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:00:18.897682 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:00:18.897687 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:00:18.897692 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:00:18.897698 | orchestrator | 2025-08-29 15:00:18.897708 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-08-29 15:00:18.897714 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:02.474) 0:08:23.784 ********* 2025-08-29 15:00:18.897719 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.897724 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.897730 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.897735 | orchestrator | 2025-08-29 15:00:18.897740 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 15:00:18.897746 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:00.427) 0:08:24.212 ********* 2025-08-29 15:00:18.897751 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.897756 | orchestrator | 2025-08-29 15:00:18.897762 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 15:00:18.897767 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:00.574) 0:08:24.786 ********* 2025-08-29 15:00:18.897772 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:00:18.897778 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:00:18.897783 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:00:18.897793 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 15:00:18.897803 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 15:00:18.897809 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 15:00:18.897814 | orchestrator | 2025-08-29 15:00:18.897820 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 15:00:18.897825 | orchestrator | Friday 29 August 2025 14:56:02 +0000 (0:00:01.380) 0:08:26.167 ********* 2025-08-29 15:00:18.897830 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.897836 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:18.897853 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:18.897858 | orchestrator | 
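The "Apply operating system tuning" task above reports five sysctl changes per OSD node (fs.aio-max-nr=1048576, fs.file-max=26234859, vm.zone_reclaim_mode=0, vm.swappiness=10, vm.min_free_kbytes=67584). A minimal Ansible sketch of an equivalent task follows; it assumes the ansible.posix.sysctl module and a hypothetical drop-in file name, and is not the exact ceph-osd role implementation.

    # Sketch only: reapplies the kernel parameters reported as "changed" in the log above.
    # Assumptions: the ansible.posix collection is available; the sysctl_file path is
    # hypothetical and not taken from this job log.
    - name: Apply operating system tuning (sketch)
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_file: /etc/sysctl.d/99-ceph-osd-tuning.conf
      loop:
        - { name: fs.aio-max-nr, value: "1048576" }
        - { name: fs.file-max, value: "26234859" }
        - { name: vm.zone_reclaim_mode, value: "0" }
        - { name: vm.swappiness, value: "10" }
        - { name: vm.min_free_kbytes, value: "67584" }
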
2025-08-29 15:00:18.897864 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:18.897869 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:02.282) 0:08:28.449 ********* 2025-08-29 15:00:18.897875 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:18.897880 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:18.897886 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.897891 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:18.897896 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:00:18.897902 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.897907 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:18.897913 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:00:18.897918 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.897924 | orchestrator | 2025-08-29 15:00:18.897929 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 15:00:18.897935 | orchestrator | Friday 29 August 2025 14:56:06 +0000 (0:00:01.276) 0:08:29.726 ********* 2025-08-29 15:00:18.897940 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.897945 | orchestrator | 2025-08-29 15:00:18.897951 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 15:00:18.897956 | orchestrator | Friday 29 August 2025 14:56:08 +0000 (0:00:02.018) 0:08:31.744 ********* 2025-08-29 15:00:18.897962 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.897967 | orchestrator | 2025-08-29 15:00:18.897973 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 15:00:18.897978 | orchestrator | Friday 29 August 2025 14:56:08 +0000 (0:00:00.531) 0:08:32.276 ********* 2025-08-29 15:00:18.897984 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2496fa80-0e44-5b7b-b63b-c9ee5061ab12', 'data_vg': 'ceph-2496fa80-0e44-5b7b-b63b-c9ee5061ab12'}) 2025-08-29 15:00:18.897990 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-95143370-f7d7-5ec5-ad3d-8af7ad027df9', 'data_vg': 'ceph-95143370-f7d7-5ec5-ad3d-8af7ad027df9'}) 2025-08-29 15:00:18.897995 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bf1413fe-a30b-500c-b995-d4125007de3c', 'data_vg': 'ceph-bf1413fe-a30b-500c-b995-d4125007de3c'}) 2025-08-29 15:00:18.898001 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63', 'data_vg': 'ceph-b3a0840c-f726-58e7-9fb9-c9f22cb6ab63'}) 2025-08-29 15:00:18.898006 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e997a020-3476-50fd-bfa0-07ccf1b1c8ec', 'data_vg': 'ceph-e997a020-3476-50fd-bfa0-07ccf1b1c8ec'}) 2025-08-29 15:00:18.898012 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a5a082ef-4dec-5d63-a984-4d3e57643ca0', 'data_vg': 'ceph-a5a082ef-4dec-5d63-a984-4d3e57643ca0'}) 2025-08-29 15:00:18.898040 | orchestrator | 2025-08-29 15:00:18.898046 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 15:00:18.898052 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:39.182) 0:09:11.458 ********* 2025-08-29 
15:00:18.898057 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898068 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898076 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898081 | orchestrator | 2025-08-29 15:00:18.898086 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 15:00:18.898092 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:00.312) 0:09:11.771 ********* 2025-08-29 15:00:18.898100 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.898106 | orchestrator | 2025-08-29 15:00:18.898111 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 15:00:18.898117 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:00.518) 0:09:12.289 ********* 2025-08-29 15:00:18.898122 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.898128 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.898133 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.898138 | orchestrator | 2025-08-29 15:00:18.898144 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 15:00:18.898149 | orchestrator | Friday 29 August 2025 14:56:49 +0000 (0:00:01.041) 0:09:13.331 ********* 2025-08-29 15:00:18.898155 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.898160 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.898166 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.898171 | orchestrator | 2025-08-29 15:00:18.898177 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:18.898182 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:02.744) 0:09:16.076 ********* 2025-08-29 15:00:18.898191 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.898197 | orchestrator | 2025-08-29 15:00:18.898203 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-08-29 15:00:18.898208 | orchestrator | Friday 29 August 2025 14:56:53 +0000 (0:00:00.521) 0:09:16.597 ********* 2025-08-29 15:00:18.898213 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.898219 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.898224 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.898230 | orchestrator | 2025-08-29 15:00:18.898235 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 15:00:18.898241 | orchestrator | Friday 29 August 2025 14:56:54 +0000 (0:00:01.625) 0:09:18.223 ********* 2025-08-29 15:00:18.898246 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.898251 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.898257 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.898262 | orchestrator | 2025-08-29 15:00:18.898267 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 15:00:18.898273 | orchestrator | Friday 29 August 2025 14:56:55 +0000 (0:00:01.174) 0:09:19.397 ********* 2025-08-29 15:00:18.898278 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.898284 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.898289 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.898294 | 
orchestrator | 2025-08-29 15:00:18.898300 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 15:00:18.898305 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:01.754) 0:09:21.152 ********* 2025-08-29 15:00:18.898311 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898316 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898322 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898327 | orchestrator | 2025-08-29 15:00:18.898332 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-08-29 15:00:18.898338 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:00.397) 0:09:21.549 ********* 2025-08-29 15:00:18.898343 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898349 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898354 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898359 | orchestrator | 2025-08-29 15:00:18.898369 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 15:00:18.898374 | orchestrator | Friday 29 August 2025 14:56:58 +0000 (0:00:00.506) 0:09:22.056 ********* 2025-08-29 15:00:18.898380 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:00:18.898385 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-08-29 15:00:18.898390 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-08-29 15:00:18.898396 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-08-29 15:00:18.898401 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-08-29 15:00:18.898406 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-08-29 15:00:18.898412 | orchestrator | 2025-08-29 15:00:18.898417 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 15:00:18.898423 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:01.009) 0:09:23.066 ********* 2025-08-29 15:00:18.898428 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 15:00:18.898434 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 15:00:18.898439 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 15:00:18.898444 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 15:00:18.898450 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-08-29 15:00:18.898455 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-08-29 15:00:18.898461 | orchestrator | 2025-08-29 15:00:18.898466 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-08-29 15:00:18.898471 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:02.176) 0:09:25.242 ********* 2025-08-29 15:00:18.898477 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 15:00:18.898482 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 15:00:18.898487 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 15:00:18.898493 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 15:00:18.898498 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-08-29 15:00:18.898504 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-08-29 15:00:18.898509 | orchestrator | 2025-08-29 15:00:18.898514 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 15:00:18.898520 | orchestrator | Friday 29 August 2025 14:57:05 +0000 (0:00:03.382) 0:09:28.625 ********* 2025-08-29 
15:00:18.898525 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898530 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898536 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.898541 | orchestrator | 2025-08-29 15:00:18.898547 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 15:00:18.898552 | orchestrator | Friday 29 August 2025 14:57:08 +0000 (0:00:03.226) 0:09:31.852 ********* 2025-08-29 15:00:18.898561 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898566 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898572 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-08-29 15:00:18.898577 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.898583 | orchestrator | 2025-08-29 15:00:18.898588 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-08-29 15:00:18.898593 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:12.565) 0:09:44.417 ********* 2025-08-29 15:00:18.898599 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898604 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898609 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898615 | orchestrator | 2025-08-29 15:00:18.898620 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:18.898626 | orchestrator | Friday 29 August 2025 14:57:21 +0000 (0:00:00.889) 0:09:45.307 ********* 2025-08-29 15:00:18.898631 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898637 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898642 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898651 | orchestrator | 2025-08-29 15:00:18.898660 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 15:00:18.898666 | orchestrator | Friday 29 August 2025 14:57:22 +0000 (0:00:00.303) 0:09:45.610 ********* 2025-08-29 15:00:18.898671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.898677 | orchestrator | 2025-08-29 15:00:18.898682 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 15:00:18.898688 | orchestrator | Friday 29 August 2025 14:57:22 +0000 (0:00:00.476) 0:09:46.086 ********* 2025-08-29 15:00:18.898693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.898698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.898704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.898709 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898714 | orchestrator | 2025-08-29 15:00:18.898720 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 15:00:18.898725 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:00.750) 0:09:46.836 ********* 2025-08-29 15:00:18.898731 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898736 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898741 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898747 | orchestrator | 2025-08-29 
15:00:18.898752 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 15:00:18.898758 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:00.332) 0:09:47.169 ********* 2025-08-29 15:00:18.898763 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898769 | orchestrator | 2025-08-29 15:00:18.898774 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 15:00:18.898779 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:00.204) 0:09:47.373 ********* 2025-08-29 15:00:18.898785 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898790 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.898796 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.898801 | orchestrator | 2025-08-29 15:00:18.898806 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 15:00:18.898812 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.324) 0:09:47.697 ********* 2025-08-29 15:00:18.898817 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898822 | orchestrator | 2025-08-29 15:00:18.898828 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 15:00:18.898833 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.211) 0:09:47.909 ********* 2025-08-29 15:00:18.898867 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898874 | orchestrator | 2025-08-29 15:00:18.898880 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 15:00:18.898885 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.189) 0:09:48.098 ********* 2025-08-29 15:00:18.898890 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898896 | orchestrator | 2025-08-29 15:00:18.898901 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 15:00:18.898907 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.106) 0:09:48.204 ********* 2025-08-29 15:00:18.898912 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898918 | orchestrator | 2025-08-29 15:00:18.898923 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 15:00:18.898929 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.201) 0:09:48.405 ********* 2025-08-29 15:00:18.898934 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898940 | orchestrator | 2025-08-29 15:00:18.898945 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 15:00:18.898950 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.647) 0:09:49.053 ********* 2025-08-29 15:00:18.898956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.898966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.898972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.898977 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.898983 | orchestrator | 2025-08-29 15:00:18.898988 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 15:00:18.898994 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.416) 0:09:49.469 ********* 2025-08-29 15:00:18.898999 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899004 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899010 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899015 | orchestrator | 2025-08-29 15:00:18.899020 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 15:00:18.899026 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:00.294) 0:09:49.764 ********* 2025-08-29 15:00:18.899035 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899040 | orchestrator | 2025-08-29 15:00:18.899046 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 15:00:18.899051 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:00.200) 0:09:49.964 ********* 2025-08-29 15:00:18.899057 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899062 | orchestrator | 2025-08-29 15:00:18.899068 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-08-29 15:00:18.899073 | orchestrator | 2025-08-29 15:00:18.899079 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.899084 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:00.596) 0:09:50.561 ********* 2025-08-29 15:00:18.899090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.899097 | orchestrator | 2025-08-29 15:00:18.899102 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:18.899108 | orchestrator | Friday 29 August 2025 14:57:28 +0000 (0:00:01.162) 0:09:51.723 ********* 2025-08-29 15:00:18.899117 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.899123 | orchestrator | 2025-08-29 15:00:18.899128 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:18.899133 | orchestrator | Friday 29 August 2025 14:57:29 +0000 (0:00:01.109) 0:09:52.833 ********* 2025-08-29 15:00:18.899139 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899144 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899150 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899155 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899160 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899166 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899171 | orchestrator | 2025-08-29 15:00:18.899176 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.899182 | orchestrator | Friday 29 August 2025 14:57:30 +0000 (0:00:00.871) 0:09:53.705 ********* 2025-08-29 15:00:18.899187 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899193 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899198 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899203 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899209 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899214 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899220 | orchestrator | 2025-08-29 15:00:18.899225 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:18.899230 | orchestrator | Friday 29 August 2025 14:57:31 +0000 (0:00:00.945) 0:09:54.650 ********* 2025-08-29 15:00:18.899235 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899240 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899248 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899253 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899258 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899263 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899267 | orchestrator | 2025-08-29 15:00:18.899272 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.899277 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:01.110) 0:09:55.761 ********* 2025-08-29 15:00:18.899282 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899287 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899291 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899296 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899301 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899306 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899310 | orchestrator | 2025-08-29 15:00:18.899315 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.899320 | orchestrator | Friday 29 August 2025 14:57:33 +0000 (0:00:01.043) 0:09:56.805 ********* 2025-08-29 15:00:18.899325 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899329 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899334 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899339 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899344 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899349 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899353 | orchestrator | 2025-08-29 15:00:18.899358 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:18.899363 | orchestrator | Friday 29 August 2025 14:57:34 +0000 (0:00:01.143) 0:09:57.948 ********* 2025-08-29 15:00:18.899368 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899373 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899378 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899383 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899387 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899392 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899397 | orchestrator | 2025-08-29 15:00:18.899402 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.899407 | orchestrator | Friday 29 August 2025 14:57:34 +0000 (0:00:00.585) 0:09:58.533 ********* 2025-08-29 15:00:18.899411 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899421 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899426 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899430 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899435 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899440 | orchestrator | 2025-08-29 15:00:18.899445 | orchestrator | TASK [ceph-handler : Check for a ceph-crash 
container] ************************* 2025-08-29 15:00:18.899449 | orchestrator | Friday 29 August 2025 14:57:36 +0000 (0:00:01.336) 0:09:59.869 ********* 2025-08-29 15:00:18.899454 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899459 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899464 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899469 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899474 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899478 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899483 | orchestrator | 2025-08-29 15:00:18.899488 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.899495 | orchestrator | Friday 29 August 2025 14:57:37 +0000 (0:00:01.051) 0:10:00.921 ********* 2025-08-29 15:00:18.899500 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899505 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899510 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899515 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899520 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899524 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899529 | orchestrator | 2025-08-29 15:00:18.899537 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.899542 | orchestrator | Friday 29 August 2025 14:57:38 +0000 (0:00:01.410) 0:10:02.332 ********* 2025-08-29 15:00:18.899547 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899552 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899557 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899561 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899566 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899571 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899576 | orchestrator | 2025-08-29 15:00:18.899581 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.899585 | orchestrator | Friday 29 August 2025 14:57:39 +0000 (0:00:00.647) 0:10:02.979 ********* 2025-08-29 15:00:18.899590 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899598 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899603 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899608 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899613 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899622 | orchestrator | 2025-08-29 15:00:18.899627 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.899632 | orchestrator | Friday 29 August 2025 14:57:40 +0000 (0:00:00.904) 0:10:03.884 ********* 2025-08-29 15:00:18.899637 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899642 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899646 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899651 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899656 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899661 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899666 | orchestrator | 2025-08-29 15:00:18.899671 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.899676 | orchestrator | Friday 29 August 2025 
14:57:40 +0000 (0:00:00.679) 0:10:04.564 ********* 2025-08-29 15:00:18.899680 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899685 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899690 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899695 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899700 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899705 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899709 | orchestrator | 2025-08-29 15:00:18.899714 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.899719 | orchestrator | Friday 29 August 2025 14:57:41 +0000 (0:00:00.982) 0:10:05.546 ********* 2025-08-29 15:00:18.899724 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899729 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899734 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899738 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899743 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899748 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899753 | orchestrator | 2025-08-29 15:00:18.899758 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.899762 | orchestrator | Friday 29 August 2025 14:57:42 +0000 (0:00:00.651) 0:10:06.198 ********* 2025-08-29 15:00:18.899767 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899772 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899777 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899781 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899786 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899791 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899796 | orchestrator | 2025-08-29 15:00:18.899801 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.899805 | orchestrator | Friday 29 August 2025 14:57:43 +0000 (0:00:00.979) 0:10:07.178 ********* 2025-08-29 15:00:18.899810 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:18.899819 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:18.899824 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:18.899829 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899833 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899849 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899854 | orchestrator | 2025-08-29 15:00:18.899859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.899864 | orchestrator | Friday 29 August 2025 14:57:44 +0000 (0:00:00.662) 0:10:07.841 ********* 2025-08-29 15:00:18.899869 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899874 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899878 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899883 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.899888 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.899893 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.899897 | orchestrator | 2025-08-29 15:00:18.899903 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.899907 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.797) 0:10:08.638 
********* 2025-08-29 15:00:18.899912 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899917 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899922 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899927 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899932 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899936 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899941 | orchestrator | 2025-08-29 15:00:18.899946 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.899951 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.625) 0:10:09.263 ********* 2025-08-29 15:00:18.899956 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.899961 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.899965 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.899970 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.899975 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.899979 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.899984 | orchestrator | 2025-08-29 15:00:18.899989 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-08-29 15:00:18.899997 | orchestrator | Friday 29 August 2025 14:57:46 +0000 (0:00:01.106) 0:10:10.370 ********* 2025-08-29 15:00:18.900002 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.900007 | orchestrator | 2025-08-29 15:00:18.900011 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-08-29 15:00:18.900016 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:04.203) 0:10:14.574 ********* 2025-08-29 15:00:18.900021 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.900026 | orchestrator | 2025-08-29 15:00:18.900031 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-08-29 15:00:18.900036 | orchestrator | Friday 29 August 2025 14:57:53 +0000 (0:00:02.390) 0:10:16.964 ********* 2025-08-29 15:00:18.900041 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.900045 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.900050 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.900055 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.900060 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.900065 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.900070 | orchestrator | 2025-08-29 15:00:18.900074 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-08-29 15:00:18.900079 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:01.449) 0:10:18.413 ********* 2025-08-29 15:00:18.900087 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.900092 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.900097 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.900102 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.900107 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.900111 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.900123 | orchestrator | 2025-08-29 15:00:18.900128 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-08-29 15:00:18.900133 | orchestrator | Friday 29 August 2025 14:57:55 +0000 (0:00:01.160) 0:10:19.573 ********* 2025-08-29 15:00:18.900138 | orchestrator | included: 
/ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.900144 | orchestrator | 2025-08-29 15:00:18.900149 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-08-29 15:00:18.900153 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:01.089) 0:10:20.663 ********* 2025-08-29 15:00:18.900158 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.900163 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.900168 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.900173 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.900178 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.900182 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.900187 | orchestrator | 2025-08-29 15:00:18.900192 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-08-29 15:00:18.900197 | orchestrator | Friday 29 August 2025 14:57:58 +0000 (0:00:01.457) 0:10:22.120 ********* 2025-08-29 15:00:18.900202 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.900207 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.900211 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.900216 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.900221 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.900226 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.900230 | orchestrator | 2025-08-29 15:00:18.900235 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-08-29 15:00:18.900240 | orchestrator | Friday 29 August 2025 14:58:01 +0000 (0:00:03.069) 0:10:25.190 ********* 2025-08-29 15:00:18.900245 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.900250 | orchestrator | 2025-08-29 15:00:18.900255 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-08-29 15:00:18.900259 | orchestrator | Friday 29 August 2025 14:58:02 +0000 (0:00:01.167) 0:10:26.357 ********* 2025-08-29 15:00:18.900264 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.900269 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.900274 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.900279 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900284 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900288 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900293 | orchestrator | 2025-08-29 15:00:18.900298 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-08-29 15:00:18.900303 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.829) 0:10:27.187 ********* 2025-08-29 15:00:18.900308 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:18.900312 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:18.900317 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:18.900322 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.900327 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.900332 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.900336 | orchestrator | 2025-08-29 15:00:18.900341 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_crash_handler_called after restart] ******* 2025-08-29 15:00:18.900346 | orchestrator | Friday 29 August 2025 14:58:06 +0000 (0:00:03.026) 0:10:30.214 ********* 2025-08-29 15:00:18.900351 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:18.900356 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:18.900361 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:18.900365 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900370 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900375 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900386 | orchestrator | 2025-08-29 15:00:18.900391 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-08-29 15:00:18.900396 | orchestrator | 2025-08-29 15:00:18.900400 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.900405 | orchestrator | Friday 29 August 2025 14:58:07 +0000 (0:00:01.174) 0:10:31.389 ********* 2025-08-29 15:00:18.900410 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.900415 | orchestrator | 2025-08-29 15:00:18.900420 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:18.900428 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:00.813) 0:10:32.203 ********* 2025-08-29 15:00:18.900433 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.900438 | orchestrator | 2025-08-29 15:00:18.900442 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:18.900447 | orchestrator | Friday 29 August 2025 14:58:09 +0000 (0:00:00.539) 0:10:32.742 ********* 2025-08-29 15:00:18.900452 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900457 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900462 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900467 | orchestrator | 2025-08-29 15:00:18.900471 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.900476 | orchestrator | Friday 29 August 2025 14:58:09 +0000 (0:00:00.342) 0:10:33.084 ********* 2025-08-29 15:00:18.900481 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900486 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900491 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900495 | orchestrator | 2025-08-29 15:00:18.900500 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:18.900508 | orchestrator | Friday 29 August 2025 14:58:10 +0000 (0:00:01.099) 0:10:34.184 ********* 2025-08-29 15:00:18.900513 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900518 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900523 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900528 | orchestrator | 2025-08-29 15:00:18.900533 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.900538 | orchestrator | Friday 29 August 2025 14:58:11 +0000 (0:00:00.815) 0:10:34.999 ********* 2025-08-29 15:00:18.900542 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900547 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900552 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 15:00:18.900557 | orchestrator | 2025-08-29 15:00:18.900561 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.900566 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:00.743) 0:10:35.743 ********* 2025-08-29 15:00:18.900571 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900576 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900581 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900585 | orchestrator | 2025-08-29 15:00:18.900590 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:18.900595 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:00.365) 0:10:36.110 ********* 2025-08-29 15:00:18.900600 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900609 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900614 | orchestrator | 2025-08-29 15:00:18.900619 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.900624 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:00.616) 0:10:36.726 ********* 2025-08-29 15:00:18.900629 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900634 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900638 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900643 | orchestrator | 2025-08-29 15:00:18.900652 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:18.900657 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:00.361) 0:10:37.087 ********* 2025-08-29 15:00:18.900662 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900666 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900671 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900676 | orchestrator | 2025-08-29 15:00:18.900681 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.900685 | orchestrator | Friday 29 August 2025 14:58:14 +0000 (0:00:00.768) 0:10:37.856 ********* 2025-08-29 15:00:18.900690 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900695 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900700 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900705 | orchestrator | 2025-08-29 15:00:18.900710 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.900714 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:00.802) 0:10:38.659 ********* 2025-08-29 15:00:18.900719 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900724 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900729 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900734 | orchestrator | 2025-08-29 15:00:18.900739 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.900743 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:00.679) 0:10:39.338 ********* 2025-08-29 15:00:18.900748 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900753 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900758 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900763 | orchestrator | 2025-08-29 15:00:18.900768 | 
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.900773 | orchestrator | Friday 29 August 2025 14:58:16 +0000 (0:00:00.322) 0:10:39.661 ********* 2025-08-29 15:00:18.900777 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900782 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900787 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900792 | orchestrator | 2025-08-29 15:00:18.900797 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.900801 | orchestrator | Friday 29 August 2025 14:58:16 +0000 (0:00:00.454) 0:10:40.116 ********* 2025-08-29 15:00:18.900806 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900811 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900816 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900821 | orchestrator | 2025-08-29 15:00:18.900825 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.900830 | orchestrator | Friday 29 August 2025 14:58:16 +0000 (0:00:00.394) 0:10:40.511 ********* 2025-08-29 15:00:18.900835 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900851 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900856 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900860 | orchestrator | 2025-08-29 15:00:18.900865 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.900870 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:00.723) 0:10:41.235 ********* 2025-08-29 15:00:18.900878 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900883 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900887 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900892 | orchestrator | 2025-08-29 15:00:18.900897 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.900902 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:00.328) 0:10:41.563 ********* 2025-08-29 15:00:18.900907 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900912 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900917 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900921 | orchestrator | 2025-08-29 15:00:18.900926 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.900935 | orchestrator | Friday 29 August 2025 14:58:18 +0000 (0:00:00.331) 0:10:41.895 ********* 2025-08-29 15:00:18.900940 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.900945 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.900949 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.900954 | orchestrator | 2025-08-29 15:00:18.900959 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.900964 | orchestrator | Friday 29 August 2025 14:58:18 +0000 (0:00:00.390) 0:10:42.285 ********* 2025-08-29 15:00:18.900969 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.900976 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.900981 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.900986 | orchestrator | 2025-08-29 15:00:18.900991 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.900996 | 
orchestrator | Friday 29 August 2025 14:58:19 +0000 (0:00:00.832) 0:10:43.118 ********* 2025-08-29 15:00:18.901001 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901005 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901010 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901015 | orchestrator | 2025-08-29 15:00:18.901020 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-08-29 15:00:18.901025 | orchestrator | Friday 29 August 2025 14:58:20 +0000 (0:00:00.852) 0:10:43.970 ********* 2025-08-29 15:00:18.901030 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.901035 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.901039 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-08-29 15:00:18.901044 | orchestrator | 2025-08-29 15:00:18.901049 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-08-29 15:00:18.901054 | orchestrator | Friday 29 August 2025 14:58:21 +0000 (0:00:01.049) 0:10:45.019 ********* 2025-08-29 15:00:18.901058 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.901063 | orchestrator | 2025-08-29 15:00:18.901068 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-08-29 15:00:18.901073 | orchestrator | Friday 29 August 2025 14:58:23 +0000 (0:00:02.171) 0:10:47.190 ********* 2025-08-29 15:00:18.901079 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-08-29 15:00:18.901085 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901090 | orchestrator | 2025-08-29 15:00:18.901095 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-08-29 15:00:18.901100 | orchestrator | Friday 29 August 2025 14:58:23 +0000 (0:00:00.318) 0:10:47.509 ********* 2025-08-29 15:00:18.901106 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:00:18.901116 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:00:18.901121 | orchestrator | 2025-08-29 15:00:18.901126 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-08-29 15:00:18.901131 | orchestrator | Friday 29 August 2025 14:58:32 +0000 (0:00:08.825) 0:10:56.335 ********* 2025-08-29 15:00:18.901135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:18.901140 | orchestrator | 2025-08-29 15:00:18.901145 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-08-29 15:00:18.901150 | orchestrator | Friday 29 August 2025 14:58:36 +0000 (0:00:03.812) 0:11:00.148 ********* 2025-08-29 15:00:18.901154 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for 
testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.901162 | orchestrator | 2025-08-29 15:00:18.901167 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-08-29 15:00:18.901172 | orchestrator | Friday 29 August 2025 14:58:37 +0000 (0:00:00.626) 0:11:00.774 ********* 2025-08-29 15:00:18.901177 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:00:18.901181 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-08-29 15:00:18.901186 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:00:18.901191 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:00:18.901196 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-08-29 15:00:18.901201 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-08-29 15:00:18.901205 | orchestrator | 2025-08-29 15:00:18.901213 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-08-29 15:00:18.901218 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:01.303) 0:11:02.077 ********* 2025-08-29 15:00:18.901223 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.901228 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:18.901233 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:18.901237 | orchestrator | 2025-08-29 15:00:18.901242 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:18.901247 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:02.176) 0:11:04.253 ********* 2025-08-29 15:00:18.901252 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:18.901257 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:18.901262 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901266 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:18.901271 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:00:18.901276 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901281 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:18.901286 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:00:18.901293 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901298 | orchestrator | 2025-08-29 15:00:18.901303 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-08-29 15:00:18.901308 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:01.159) 0:11:05.412 ********* 2025-08-29 15:00:18.901312 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901317 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901322 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901327 | orchestrator | 2025-08-29 15:00:18.901332 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-08-29 15:00:18.901336 | orchestrator | Friday 29 August 2025 14:58:44 +0000 (0:00:02.790) 0:11:08.203 ********* 2025-08-29 15:00:18.901341 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901346 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.901351 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.901356 | orchestrator | 2025-08-29 15:00:18.901360 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-08-29 15:00:18.901365 | orchestrator | Friday 29 August 2025 14:58:45 +0000 (0:00:00.388) 0:11:08.592 ********* 2025-08-29 15:00:18.901370 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.901375 | orchestrator | 2025-08-29 15:00:18.901380 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:18.901385 | orchestrator | Friday 29 August 2025 14:58:46 +0000 (0:00:01.458) 0:11:10.051 ********* 2025-08-29 15:00:18.901390 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.901398 | orchestrator | 2025-08-29 15:00:18.901403 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-08-29 15:00:18.901408 | orchestrator | Friday 29 August 2025 14:58:47 +0000 (0:00:00.617) 0:11:10.668 ********* 2025-08-29 15:00:18.901413 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901418 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901422 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901427 | orchestrator | 2025-08-29 15:00:18.901432 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-08-29 15:00:18.901437 | orchestrator | Friday 29 August 2025 14:58:48 +0000 (0:00:01.662) 0:11:12.331 ********* 2025-08-29 15:00:18.901442 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901446 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901451 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901456 | orchestrator | 2025-08-29 15:00:18.901461 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-08-29 15:00:18.901466 | orchestrator | Friday 29 August 2025 14:58:49 +0000 (0:00:01.225) 0:11:13.556 ********* 2025-08-29 15:00:18.901470 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901475 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901480 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901485 | orchestrator | 2025-08-29 15:00:18.901490 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-08-29 15:00:18.901495 | orchestrator | Friday 29 August 2025 14:58:51 +0000 (0:00:02.011) 0:11:15.568 ********* 2025-08-29 15:00:18.901499 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901504 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901509 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901514 | orchestrator | 2025-08-29 15:00:18.901519 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-08-29 15:00:18.901523 | orchestrator | Friday 29 August 2025 14:58:53 +0000 (0:00:01.926) 0:11:17.495 ********* 2025-08-29 15:00:18.901528 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901533 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901538 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901543 | orchestrator | 2025-08-29 15:00:18.901548 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 
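
For orientation, the ceph-mds steps logged above (CephFS pool creation, "Create ceph filesystem", and the generated systemd units for the containerized MDS) correspond roughly to the following commands. This is an illustrative sketch only: the pool names, pg_num=16 and size=3 are taken from the log, while the filesystem name and the exact unit names rendered by ceph-ansible are assumptions.

# Sketch of the ceph-mds steps shown above (run the ceph commands on a monitor node).
# Pool names, pg_num=16 and size=3 come from the log; the filesystem name "cephfs"
# and the systemd unit names are assumptions based on ceph-ansible's containerized layout.
ceph osd pool create cephfs_data 16 16 replicated replicated_rule
ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_metadata size 3
ceph osd pool application enable cephfs_data cephfs
ceph osd pool application enable cephfs_metadata cephfs

# Create the filesystem on top of the two pools.
ceph fs new cephfs cephfs_metadata cephfs_data

# On each MDS node: enable and start the generated units.
systemctl daemon-reload
systemctl enable --now ceph-mds.target
systemctl start "ceph-mds@$(hostname -s)"

# Wait for the MDS admin socket to appear, as the "Wait for mds socket" task does.
until ls /var/run/ceph/*mds*.asok >/dev/null 2>&1; do sleep 1; done
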
2025-08-29 15:00:18.901552 | orchestrator | Friday 29 August 2025 14:58:55 +0000 (0:00:01.971) 0:11:19.466 ********* 2025-08-29 15:00:18.901557 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901562 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901567 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901572 | orchestrator | 2025-08-29 15:00:18.901576 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 15:00:18.901581 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:00.672) 0:11:20.138 ********* 2025-08-29 15:00:18.901586 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.901591 | orchestrator | 2025-08-29 15:00:18.901596 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-08-29 15:00:18.901600 | orchestrator | Friday 29 August 2025 14:58:57 +0000 (0:00:00.943) 0:11:21.082 ********* 2025-08-29 15:00:18.901608 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901613 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901618 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901623 | orchestrator | 2025-08-29 15:00:18.901628 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 15:00:18.901633 | orchestrator | Friday 29 August 2025 14:58:57 +0000 (0:00:00.359) 0:11:21.441 ********* 2025-08-29 15:00:18.901638 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.901643 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.901647 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.901652 | orchestrator | 2025-08-29 15:00:18.901661 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 15:00:18.901666 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:01.334) 0:11:22.776 ********* 2025-08-29 15:00:18.901671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.901676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.901681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.901686 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901690 | orchestrator | 2025-08-29 15:00:18.901695 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 15:00:18.901703 | orchestrator | Friday 29 August 2025 14:59:00 +0000 (0:00:01.721) 0:11:24.498 ********* 2025-08-29 15:00:18.901708 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901712 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901717 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901722 | orchestrator | 2025-08-29 15:00:18.901727 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 15:00:18.901732 | orchestrator | 2025-08-29 15:00:18.901737 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:18.901741 | orchestrator | Friday 29 August 2025 14:59:01 +0000 (0:00:00.718) 0:11:25.216 ********* 2025-08-29 15:00:18.901746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.901751 | orchestrator | 2025-08-29 
15:00:18.901756 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:18.901761 | orchestrator | Friday 29 August 2025 14:59:02 +0000 (0:00:00.863) 0:11:26.079 ********* 2025-08-29 15:00:18.901765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.901770 | orchestrator | 2025-08-29 15:00:18.901775 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:18.901780 | orchestrator | Friday 29 August 2025 14:59:03 +0000 (0:00:00.601) 0:11:26.681 ********* 2025-08-29 15:00:18.901785 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901790 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.901794 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.901799 | orchestrator | 2025-08-29 15:00:18.901804 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:18.901809 | orchestrator | Friday 29 August 2025 14:59:03 +0000 (0:00:00.332) 0:11:27.014 ********* 2025-08-29 15:00:18.901814 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901819 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901823 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901828 | orchestrator | 2025-08-29 15:00:18.901833 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:18.901849 | orchestrator | Friday 29 August 2025 14:59:04 +0000 (0:00:01.151) 0:11:28.166 ********* 2025-08-29 15:00:18.901854 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901859 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901864 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901868 | orchestrator | 2025-08-29 15:00:18.901873 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:18.901878 | orchestrator | Friday 29 August 2025 14:59:05 +0000 (0:00:00.788) 0:11:28.954 ********* 2025-08-29 15:00:18.901883 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.901888 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.901893 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.901897 | orchestrator | 2025-08-29 15:00:18.901902 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:18.901907 | orchestrator | Friday 29 August 2025 14:59:06 +0000 (0:00:00.783) 0:11:29.738 ********* 2025-08-29 15:00:18.901912 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901917 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.901926 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.901930 | orchestrator | 2025-08-29 15:00:18.901935 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:18.901940 | orchestrator | Friday 29 August 2025 14:59:06 +0000 (0:00:00.353) 0:11:30.091 ********* 2025-08-29 15:00:18.901945 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901950 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.901954 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.901959 | orchestrator | 2025-08-29 15:00:18.901964 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:18.901969 | orchestrator | 
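
The "Check for a … container" tasks above only record whether a given daemon container already exists on each node; their results feed the handler_*_status facts that follow. A rough shell equivalent, assuming Docker as the container engine and the usual ceph-<daemon> container naming (both assumptions, not taken from the playbook source), would be:

# Rough equivalent of the container presence checks.
# Assumptions: Docker engine, containers named ceph-<daemon>.
for daemon in mon osd mds rgw mgr rbd-mirror nfs crash exporter; do
  if docker ps --format '{{.Names}}' | grep -q "ceph-${daemon}"; then
    echo "ceph-${daemon} container is running on $(hostname -s)"
  fi
done
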
Friday 29 August 2025 14:59:07 +0000 (0:00:00.739) 0:11:30.830 ********* 2025-08-29 15:00:18.901974 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.901979 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.901983 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.901988 | orchestrator | 2025-08-29 15:00:18.901993 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:18.901998 | orchestrator | Friday 29 August 2025 14:59:07 +0000 (0:00:00.369) 0:11:31.200 ********* 2025-08-29 15:00:18.902003 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.902007 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902012 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902051 | orchestrator | 2025-08-29 15:00:18.902056 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:18.902062 | orchestrator | Friday 29 August 2025 14:59:08 +0000 (0:00:00.793) 0:11:31.994 ********* 2025-08-29 15:00:18.902067 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.902071 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902076 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902081 | orchestrator | 2025-08-29 15:00:18.902089 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:18.902094 | orchestrator | Friday 29 August 2025 14:59:09 +0000 (0:00:00.768) 0:11:32.762 ********* 2025-08-29 15:00:18.902099 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902103 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902108 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902113 | orchestrator | 2025-08-29 15:00:18.902118 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:18.902122 | orchestrator | Friday 29 August 2025 14:59:09 +0000 (0:00:00.663) 0:11:33.426 ********* 2025-08-29 15:00:18.902127 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902132 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902137 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902142 | orchestrator | 2025-08-29 15:00:18.902146 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:18.902151 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:00.349) 0:11:33.776 ********* 2025-08-29 15:00:18.902156 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.902161 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902166 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902170 | orchestrator | 2025-08-29 15:00:18.902178 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:18.902183 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:00.374) 0:11:34.150 ********* 2025-08-29 15:00:18.902188 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.902193 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902197 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902202 | orchestrator | 2025-08-29 15:00:18.902207 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:18.902212 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:00.355) 0:11:34.505 ********* 2025-08-29 15:00:18.902216 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 15:00:18.902221 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902226 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902230 | orchestrator | 2025-08-29 15:00:18.902235 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:18.902245 | orchestrator | Friday 29 August 2025 14:59:11 +0000 (0:00:00.691) 0:11:35.196 ********* 2025-08-29 15:00:18.902250 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902255 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902260 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902264 | orchestrator | 2025-08-29 15:00:18.902269 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:18.902274 | orchestrator | Friday 29 August 2025 14:59:11 +0000 (0:00:00.338) 0:11:35.535 ********* 2025-08-29 15:00:18.902279 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902284 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902288 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902293 | orchestrator | 2025-08-29 15:00:18.902298 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:18.902303 | orchestrator | Friday 29 August 2025 14:59:12 +0000 (0:00:00.358) 0:11:35.894 ********* 2025-08-29 15:00:18.902307 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902312 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902317 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902322 | orchestrator | 2025-08-29 15:00:18.902327 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:18.902331 | orchestrator | Friday 29 August 2025 14:59:12 +0000 (0:00:00.343) 0:11:36.237 ********* 2025-08-29 15:00:18.902336 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.902341 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902346 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902351 | orchestrator | 2025-08-29 15:00:18.902355 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:18.902360 | orchestrator | Friday 29 August 2025 14:59:13 +0000 (0:00:00.661) 0:11:36.899 ********* 2025-08-29 15:00:18.902365 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.902370 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.902375 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.902379 | orchestrator | 2025-08-29 15:00:18.902384 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-08-29 15:00:18.902389 | orchestrator | Friday 29 August 2025 14:59:13 +0000 (0:00:00.585) 0:11:37.485 ********* 2025-08-29 15:00:18.902394 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.902399 | orchestrator | 2025-08-29 15:00:18.902404 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:00:18.902408 | orchestrator | Friday 29 August 2025 14:59:14 +0000 (0:00:00.876) 0:11:38.362 ********* 2025-08-29 15:00:18.902413 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902418 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:18.902423 | 
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:18.902427 | orchestrator | 2025-08-29 15:00:18.902432 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:18.902437 | orchestrator | Friday 29 August 2025 14:59:16 +0000 (0:00:02.189) 0:11:40.551 ********* 2025-08-29 15:00:18.902442 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:18.902447 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:18.902451 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.902456 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:18.902461 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:00:18.902466 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.902470 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:18.902475 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:00:18.902480 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.902485 | orchestrator | 2025-08-29 15:00:18.902490 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-08-29 15:00:18.902498 | orchestrator | Friday 29 August 2025 14:59:18 +0000 (0:00:01.312) 0:11:41.864 ********* 2025-08-29 15:00:18.902506 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902511 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902516 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902521 | orchestrator | 2025-08-29 15:00:18.902526 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-08-29 15:00:18.902531 | orchestrator | Friday 29 August 2025 14:59:18 +0000 (0:00:00.327) 0:11:42.191 ********* 2025-08-29 15:00:18.902535 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.902540 | orchestrator | 2025-08-29 15:00:18.902545 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-08-29 15:00:18.902550 | orchestrator | Friday 29 August 2025 14:59:19 +0000 (0:00:01.002) 0:11:43.194 ********* 2025-08-29 15:00:18.902555 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.902563 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.902568 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.902573 | orchestrator | 2025-08-29 15:00:18.902578 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-08-29 15:00:18.902582 | orchestrator | Friday 29 August 2025 14:59:20 +0000 (0:00:00.989) 0:11:44.183 ********* 2025-08-29 15:00:18.902587 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902592 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:00:18.902597 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902602 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:00:18.902607 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902612 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:00:18.902617 | orchestrator | 2025-08-29 15:00:18.902621 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:00:18.902626 | orchestrator | Friday 29 August 2025 14:59:25 +0000 (0:00:04.839) 0:11:49.023 ********* 2025-08-29 15:00:18.902631 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902635 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:18.902640 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902645 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:18.902650 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:18.902655 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:18.902659 | orchestrator | 2025-08-29 15:00:18.902664 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:18.902669 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:02.759) 0:11:51.783 ********* 2025-08-29 15:00:18.902674 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:18.902678 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.902683 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:18.902688 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.902698 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:18.902703 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.902708 | orchestrator | 2025-08-29 15:00:18.902713 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-08-29 15:00:18.902718 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:01.636) 0:11:53.420 ********* 2025-08-29 15:00:18.902722 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-08-29 15:00:18.902727 | orchestrator | 2025-08-29 15:00:18.902732 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-08-29 15:00:18.902737 | orchestrator | Friday 29 August 2025 14:59:30 +0000 (0:00:00.259) 0:11:53.680 ********* 2025-08-29 15:00:18.902741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 
8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902766 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902771 | orchestrator | 2025-08-29 15:00:18.902778 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-08-29 15:00:18.902783 | orchestrator | Friday 29 August 2025 14:59:30 +0000 (0:00:00.636) 0:11:54.316 ********* 2025-08-29 15:00:18.902788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:18.902812 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902817 | orchestrator | 2025-08-29 15:00:18.902824 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-08-29 15:00:18.902830 | orchestrator | Friday 29 August 2025 14:59:31 +0000 (0:00:00.624) 0:11:54.940 ********* 2025-08-29 15:00:18.902834 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:18.902868 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:18.902874 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:18.902879 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:18.902884 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:18.902892 | orchestrator | 2025-08-29 15:00:18.902897 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-08-29 15:00:18.902902 | orchestrator | Friday 29 August 2025 15:00:02 +0000 (0:00:31.130) 0:12:26.071 ********* 2025-08-29 15:00:18.902906 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902911 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902916 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902921 | orchestrator | 2025-08-29 15:00:18.902926 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-08-29 15:00:18.902930 | orchestrator | Friday 29 August 2025 15:00:02 +0000 (0:00:00.361) 
0:12:26.433 ********* 2025-08-29 15:00:18.902935 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.902940 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.902945 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.902949 | orchestrator | 2025-08-29 15:00:18.902954 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-08-29 15:00:18.902959 | orchestrator | Friday 29 August 2025 15:00:03 +0000 (0:00:00.331) 0:12:26.764 ********* 2025-08-29 15:00:18.902964 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.902969 | orchestrator | 2025-08-29 15:00:18.902973 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-08-29 15:00:18.902978 | orchestrator | Friday 29 August 2025 15:00:04 +0000 (0:00:00.977) 0:12:27.742 ********* 2025-08-29 15:00:18.902983 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.902988 | orchestrator | 2025-08-29 15:00:18.902992 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-08-29 15:00:18.902997 | orchestrator | Friday 29 August 2025 15:00:04 +0000 (0:00:00.618) 0:12:28.361 ********* 2025-08-29 15:00:18.903002 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.903007 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.903011 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.903016 | orchestrator | 2025-08-29 15:00:18.903021 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-08-29 15:00:18.903026 | orchestrator | Friday 29 August 2025 15:00:06 +0000 (0:00:01.763) 0:12:30.124 ********* 2025-08-29 15:00:18.903031 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.903035 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.903040 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.903045 | orchestrator | 2025-08-29 15:00:18.903050 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-08-29 15:00:18.903055 | orchestrator | Friday 29 August 2025 15:00:07 +0000 (0:00:01.283) 0:12:31.408 ********* 2025-08-29 15:00:18.903059 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:18.903064 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:18.903069 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:18.903074 | orchestrator | 2025-08-29 15:00:18.903078 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-08-29 15:00:18.903083 | orchestrator | Friday 29 August 2025 15:00:09 +0000 (0:00:01.740) 0:12:33.149 ********* 2025-08-29 15:00:18.903092 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.903097 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.903101 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:18.903106 | orchestrator | 2025-08-29 15:00:18.903111 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] 
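
The RGW pool creation (the 31-second "Create rgw pools" task) and the subsequent container start can be summarised with the sketch below. Pool names, pg_num=8, size=3, the rgw0 instance and port 8081 are taken from the log; the systemd unit name follows ceph-ansible's containerized naming convention and is an assumption.

# Sketch of the ceph-rgw steps above. Pool names, pg_num=8, size=3 and the
# rgw0/8081 instance come from the log; the unit naming is an assumption.
for pool in default.rgw.buckets.data default.rgw.buckets.index \
            default.rgw.control default.rgw.log default.rgw.meta; do
  ceph osd pool create "${pool}" 8 8 replicated
  ceph osd pool set "${pool}" size 3
  ceph osd pool application enable "${pool}" rgw
done

# On each RGW node: enable the target and start the rgw0 instance.
systemctl daemon-reload
systemctl enable --now ceph-radosgw.target
systemctl start "ceph-radosgw@rgw.$(hostname -s).rgw0"

# Quick smoke test against the instance address/port logged for testbed-node-3.
curl -s http://192.168.16.13:8081/ | head -c 200
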
********************** 2025-08-29 15:00:18.903116 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:02.733) 0:12:35.882 ********* 2025-08-29 15:00:18.903121 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.903129 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.903134 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.903138 | orchestrator | 2025-08-29 15:00:18.903143 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 15:00:18.903148 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.364) 0:12:36.247 ********* 2025-08-29 15:00:18.903153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:18.903161 | orchestrator | 2025-08-29 15:00:18.903169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 15:00:18.903174 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.835) 0:12:37.083 ********* 2025-08-29 15:00:18.903179 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.903183 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.903188 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.903193 | orchestrator | 2025-08-29 15:00:18.903198 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 15:00:18.903202 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.384) 0:12:37.467 ********* 2025-08-29 15:00:18.903207 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.903212 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:18.903217 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:18.903221 | orchestrator | 2025-08-29 15:00:18.903226 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 15:00:18.903231 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:00.378) 0:12:37.846 ********* 2025-08-29 15:00:18.903236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:18.903240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:18.903245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:18.903250 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:18.903255 | orchestrator | 2025-08-29 15:00:18.903259 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 15:00:18.903264 | orchestrator | Friday 29 August 2025 15:00:15 +0000 (0:00:00.976) 0:12:38.822 ********* 2025-08-29 15:00:18.903269 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:18.903274 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:18.903279 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:18.903283 | orchestrator | 2025-08-29 15:00:18.903288 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:00:18.903293 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-08-29 15:00:18.903298 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-08-29 15:00:18.903303 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-08-29 15:00:18.903308 | orchestrator | testbed-node-3 : ok=186  
changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-08-29 15:00:18.903313 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-08-29 15:00:18.903317 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-08-29 15:00:18.903322 | orchestrator | 2025-08-29 15:00:18.903327 | orchestrator | 2025-08-29 15:00:18.903332 | orchestrator | 2025-08-29 15:00:18.903337 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:00:18.903341 | orchestrator | Friday 29 August 2025 15:00:15 +0000 (0:00:00.290) 0:12:39.113 ********* 2025-08-29 15:00:18.903346 | orchestrator | =============================================================================== 2025-08-29 15:00:18.903355 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------ 116.68s 2025-08-29 15:00:18.903360 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.18s 2025-08-29 15:00:18.903365 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.13s 2025-08-29 15:00:18.903369 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.05s 2025-08-29 15:00:18.903374 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.09s 2025-08-29 15:00:18.903379 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.88s 2025-08-29 15:00:18.903384 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.57s 2025-08-29 15:00:18.903389 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.51s 2025-08-29 15:00:18.903394 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.35s 2025-08-29 15:00:18.903401 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.83s 2025-08-29 15:00:18.903405 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.56s 2025-08-29 15:00:18.903410 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.33s 2025-08-29 15:00:18.903414 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.16s 2025-08-29 15:00:18.903419 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.84s 2025-08-29 15:00:18.903423 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.20s 2025-08-29 15:00:18.903428 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.90s 2025-08-29 15:00:18.903432 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.81s 2025-08-29 15:00:18.903437 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.51s 2025-08-29 15:00:18.903442 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.38s 2025-08-29 15:00:18.903446 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.31s 2025-08-29 15:00:18.903453 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:18.903458 | orchestrator | 2025-08-29 15:00:18 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 15:00:21.922548 | orchestrator | 2025-08-29 15:00:21 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:21.923500 | orchestrator | 2025-08-29 15:00:21 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:21.925213 | orchestrator | 2025-08-29 15:00:21 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:21.925721 | orchestrator | 2025-08-29 15:00:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:24.970640 | orchestrator | 2025-08-29 15:00:24 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:24.974114 | orchestrator | 2025-08-29 15:00:24 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:24.977691 | orchestrator | 2025-08-29 15:00:24 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:24.978200 | orchestrator | 2025-08-29 15:00:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:28.037667 | orchestrator | 2025-08-29 15:00:28 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:28.039786 | orchestrator | 2025-08-29 15:00:28 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:28.041183 | orchestrator | 2025-08-29 15:00:28 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:28.041432 | orchestrator | 2025-08-29 15:00:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:31.083475 | orchestrator | 2025-08-29 15:00:31 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:31.084570 | orchestrator | 2025-08-29 15:00:31 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:31.088787 | orchestrator | 2025-08-29 15:00:31 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:31.088914 | orchestrator | 2025-08-29 15:00:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:34.132697 | orchestrator | 2025-08-29 15:00:34 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:34.135304 | orchestrator | 2025-08-29 15:00:34 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:34.135956 | orchestrator | 2025-08-29 15:00:34 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:34.136314 | orchestrator | 2025-08-29 15:00:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:37.180526 | orchestrator | 2025-08-29 15:00:37 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:37.183545 | orchestrator | 2025-08-29 15:00:37 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:37.185915 | orchestrator | 2025-08-29 15:00:37 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:37.186260 | orchestrator | 2025-08-29 15:00:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:40.233415 | orchestrator | 2025-08-29 15:00:40 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:40.233747 | orchestrator | 2025-08-29 15:00:40 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:40.234354 | orchestrator | 2025-08-29 15:00:40 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state 
STARTED 2025-08-29 15:00:40.234386 | orchestrator | 2025-08-29 15:00:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:43.290640 | orchestrator | 2025-08-29 15:00:43 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:43.290758 | orchestrator | 2025-08-29 15:00:43 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:43.291251 | orchestrator | 2025-08-29 15:00:43 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:43.291291 | orchestrator | 2025-08-29 15:00:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:46.348039 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:46.348162 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:46.349530 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:46.349556 | orchestrator | 2025-08-29 15:00:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:49.397129 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:49.399439 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:49.401173 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:49.401224 | orchestrator | 2025-08-29 15:00:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:52.462871 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:52.467760 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:52.469739 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:52.470585 | orchestrator | 2025-08-29 15:00:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:55.516000 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:55.518635 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:55.519389 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:55.519439 | orchestrator | 2025-08-29 15:00:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:58.580039 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:00:58.582551 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:00:58.586902 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:00:58.586969 | orchestrator | 2025-08-29 15:00:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:01.634228 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:01:01.635119 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:01.635616 | orchestrator 
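
The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the orchestrator polling its background tasks until they leave the STARTED state. A minimal sketch of such a wait loop is shown below; get_task_state is a hypothetical helper standing in for whatever API call the real tooling makes, not an actual OSISM command.

# Minimal polling-loop sketch. get_task_state is a hypothetical helper that
# prints STARTED/SUCCESS/FAILURE for a task ID; it is not a real OSISM command.
wait_for_task() {
  local task_id="$1" state
  while true; do
    state="$(get_task_state "${task_id}")"
    echo "$(date -u '+%Y-%m-%d %H:%M:%S') | INFO | Task ${task_id} is in state ${state}"
    case "${state}" in
      SUCCESS) return 0 ;;
      FAILURE) return 1 ;;
    esac
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done
}
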
| 2025-08-29 15:01:01 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:01.635638 | orchestrator | 2025-08-29 15:01:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:04.682363 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:01:04.683927 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:04.685611 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:04.685685 | orchestrator | 2025-08-29 15:01:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:07.733168 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:01:07.734919 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:07.736479 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:07.736518 | orchestrator | 2025-08-29 15:01:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:10.785317 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state STARTED 2025-08-29 15:01:10.788553 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:10.791104 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:10.791156 | orchestrator | 2025-08-29 15:01:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:13.839309 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task d387e76b-498f-4841-b00e-175abb7aa1af is in state SUCCESS 2025-08-29 15:01:13.841849 | orchestrator | 2025-08-29 15:01:13.841921 | orchestrator | 2025-08-29 15:01:13.841933 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:01:13.841943 | orchestrator | 2025-08-29 15:01:13.841951 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:01:13.841960 | orchestrator | Friday 29 August 2025 14:58:02 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-08-29 15:01:13.841968 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:13.841977 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:13.841985 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:13.841993 | orchestrator | 2025-08-29 15:01:13.842001 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:01:13.842009 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.287) 0:00:00.537 ********* 2025-08-29 15:01:13.842061 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-08-29 15:01:13.842071 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-08-29 15:01:13.842079 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-08-29 15:01:13.842087 | orchestrator | 2025-08-29 15:01:13.842096 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-08-29 15:01:13.842230 | orchestrator | 2025-08-29 15:01:13.842246 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:01:13.842254 | 
orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.434) 0:00:00.971 ********* 2025-08-29 15:01:13.842262 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:13.842270 | orchestrator | 2025-08-29 15:01:13.842278 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-08-29 15:01:13.842286 | orchestrator | Friday 29 August 2025 14:58:04 +0000 (0:00:00.472) 0:00:01.444 ********* 2025-08-29 15:01:13.842294 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:01:13.842302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:01:13.842310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:01:13.842318 | orchestrator | 2025-08-29 15:01:13.842338 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-08-29 15:01:13.842347 | orchestrator | Friday 29 August 2025 14:58:04 +0000 (0:00:00.679) 0:00:02.123 ********* 2025-08-29 15:01:13.842358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.842385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.842427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.842439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.842450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.842460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.842474 | orchestrator | 2025-08-29 15:01:13.842487 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:01:13.842495 | orchestrator | Friday 29 August 2025 14:58:06 +0000 (0:00:01.635) 0:00:03.759 ********* 2025-08-29 15:01:13.842503 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:13.842512 | orchestrator | 2025-08-29 15:01:13.842519 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-08-29 15:01:13.842527 | orchestrator | Friday 29 August 2025 14:58:06 +0000 (0:00:00.637) 0:00:04.396 ********* 2025-08-29 15:01:13.842545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.842554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.842563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.842571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.842597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.842607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.842616 | orchestrator | 2025-08-29 15:01:13.842624 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 15:01:13.842632 | orchestrator | Friday 29 August 2025 14:58:09 +0000 (0:00:02.942) 0:00:07.339 ********* 2025-08-29 15:01:13.842640 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:01:13.842653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:01:13.842671 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:13.842685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:01:13.842694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:01:13.842702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:13.842711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:01:13.842720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:01:13.842733 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:13.842742 | orchestrator | 2025-08-29 15:01:13.842754 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 15:01:13.842762 | orchestrator | Friday 29 August 2025 14:58:11 +0000 (0:00:01.676) 0:00:09.016 ********* 2025-08-29 15:01:13.842799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:01:13.842808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:01:13.842817 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:13.842825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:01:13.842839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:01:13.842848 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:13.842875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:01:13.842885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:01:13.842895 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:13.842904 | orchestrator | 2025-08-29 15:01:13.842913 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 15:01:13.842922 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:01.031) 0:00:10.047 ********* 2025-08-29 15:01:13.842932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.843024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.843040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.843056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.843065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-08-29 15:01:13.843081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.843090 | orchestrator | 2025-08-29 15:01:13.843098 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 15:01:13.843106 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:02.806) 0:00:12.854 ********* 2025-08-29 15:01:13.843114 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:13.843122 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:13.843130 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:13.843138 | orchestrator | 2025-08-29 15:01:13.843150 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 15:01:13.843158 | orchestrator | Friday 29 August 2025 14:58:19 +0000 (0:00:04.428) 0:00:17.282 ********* 2025-08-29 15:01:13.843166 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:13.843174 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:13.843182 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:13.843190 | orchestrator | 2025-08-29 15:01:13.843198 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 15:01:13.843206 | orchestrator | Friday 29 August 2025 14:58:22 +0000 (0:00:02.215) 0:00:19.498 ********* 2025-08-29 15:01:13.843221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.843230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.843243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:01:13.843252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.843270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.843279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:01:13.843293 | orchestrator | 2025-08-29 15:01:13.843301 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:01:13.843309 | orchestrator | Friday 29 August 2025 14:58:24 +0000 (0:00:02.226) 0:00:21.725 ********* 2025-08-29 15:01:13.843317 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:13.843325 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:13.843332 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:13.843340 | orchestrator | 2025-08-29 15:01:13.843348 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 15:01:13.843356 | orchestrator | Friday 29 August 2025 14:58:24 +0000 (0:00:00.343) 0:00:22.068 ********* 2025-08-29 15:01:13.843364 | orchestrator | 2025-08-29 15:01:13.843372 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 15:01:13.843380 | orchestrator | Friday 29 August 2025 14:58:24 +0000 (0:00:00.067) 0:00:22.135 ********* 2025-08-29 15:01:13.843388 | orchestrator | 2025-08-29 15:01:13.843395 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 15:01:13.843403 | orchestrator | Friday 29 August 2025 14:58:24 +0000 (0:00:00.068) 0:00:22.204 ********* 2025-08-29 15:01:13.843411 | orchestrator | 2025-08-29 15:01:13.843419 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-08-29 15:01:13.843427 | orchestrator | Friday 29 August 2025 14:58:25 +0000 (0:00:00.305) 0:00:22.509 ********* 2025-08-29 15:01:13.843435 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:13.843442 | orchestrator | 2025-08-29 15:01:13.843450 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-08-29 15:01:13.843458 | orchestrator | Friday 29 August 2025 14:58:25 +0000 (0:00:00.290) 0:00:22.799 ********* 2025-08-29 15:01:13.843466 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:13.843474 | orchestrator | 2025-08-29 15:01:13.843482 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-08-29 15:01:13.843490 | orchestrator | Friday 
29 August 2025 14:58:25 +0000 (0:00:00.204) 0:00:23.004 ********* 2025-08-29 15:01:13.843498 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:13.843506 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:13.843514 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:13.843522 | orchestrator | 2025-08-29 15:01:13.843530 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-08-29 15:01:13.843538 | orchestrator | Friday 29 August 2025 14:59:38 +0000 (0:01:13.108) 0:01:36.112 ********* 2025-08-29 15:01:13.843546 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:13.843553 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:13.843561 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:13.843569 | orchestrator | 2025-08-29 15:01:13.843577 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:01:13.843586 | orchestrator | Friday 29 August 2025 15:01:02 +0000 (0:01:23.845) 0:02:59.957 ********* 2025-08-29 15:01:13.843593 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:13.843601 | orchestrator | 2025-08-29 15:01:13.843613 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-08-29 15:01:13.843621 | orchestrator | Friday 29 August 2025 15:01:03 +0000 (0:00:00.823) 0:03:00.781 ********* 2025-08-29 15:01:13.843629 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:13.843637 | orchestrator | 2025-08-29 15:01:13.843645 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-08-29 15:01:13.843653 | orchestrator | Friday 29 August 2025 15:01:05 +0000 (0:00:02.514) 0:03:03.296 ********* 2025-08-29 15:01:13.843661 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:13.843669 | orchestrator | 2025-08-29 15:01:13.843677 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-08-29 15:01:13.843688 | orchestrator | Friday 29 August 2025 15:01:08 +0000 (0:00:02.342) 0:03:05.638 ********* 2025-08-29 15:01:13.843696 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:13.843704 | orchestrator | 2025-08-29 15:01:13.843712 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-08-29 15:01:13.843720 | orchestrator | Friday 29 August 2025 15:01:10 +0000 (0:00:02.734) 0:03:08.372 ********* 2025-08-29 15:01:13.843728 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:13.843736 | orchestrator | 2025-08-29 15:01:13.843747 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:01:13.843757 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:01:13.843767 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 15:01:13.843799 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 15:01:13.843807 | orchestrator | 2025-08-29 15:01:13.843815 | orchestrator | 2025-08-29 15:01:13.843823 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:01:13.843831 | orchestrator | Friday 29 August 2025 15:01:13 +0000 (0:00:02.390) 0:03:10.763 ********* 2025-08-29 15:01:13.843839 | 
orchestrator | =============================================================================== 2025-08-29 15:01:13.843847 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.85s 2025-08-29 15:01:13.843855 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.11s 2025-08-29 15:01:13.843862 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.43s 2025-08-29 15:01:13.843870 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.94s 2025-08-29 15:01:13.843879 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.81s 2025-08-29 15:01:13.843887 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.73s 2025-08-29 15:01:13.843895 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.51s 2025-08-29 15:01:13.843902 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2025-08-29 15:01:13.843910 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s 2025-08-29 15:01:13.843918 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.23s 2025-08-29 15:01:13.843926 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.22s 2025-08-29 15:01:13.843934 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.68s 2025-08-29 15:01:13.843942 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.64s 2025-08-29 15:01:13.843950 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.03s 2025-08-29 15:01:13.843958 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.82s 2025-08-29 15:01:13.843966 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2025-08-29 15:01:13.843973 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2025-08-29 15:01:13.843981 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-08-29 15:01:13.843989 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.44s 2025-08-29 15:01:13.843997 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-08-29 15:01:13.844005 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:13.845368 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:13.845399 | orchestrator | 2025-08-29 15:01:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:16.883820 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:16.885491 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:16.885511 | orchestrator | 2025-08-29 15:01:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:19.925059 | orchestrator | 2025-08-29 15:01:19 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:19.926863 | orchestrator | 2025-08-29 15:01:19 | 
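Note: the post-config steps above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") manage an OpenSearch Index State Management (ISM) policy over the REST API on the node address that also appears in the healthchecks. The policy body and id used by the role are not visible in this log; the Python sketch below only illustrates the check-then-create pattern, with a hypothetical policy id, index pattern and retention age.

    import requests

    OPENSEARCH = "http://192.168.16.10:9200"   # internal node address, as seen in the healthcheck_curl tests above
    POLICY_ID = "delete-old-logs"              # hypothetical id, not taken from this log
    POLICY = {
        "policy": {
            "description": "Delete log indices after a retention period (illustrative values)",
            "default_state": "hot",
            "states": [
                {"name": "hot", "actions": [],
                 "transitions": [{"state_name": "delete",
                                  "conditions": {"min_index_age": "14d"}}]},
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
            "ism_template": [{"index_patterns": ["flog-*"], "priority": 1}],  # hypothetical index pattern
        }
    }

    url = f"{OPENSEARCH}/_plugins/_ism/policies/{POLICY_ID}"
    if requests.get(url).status_code == 404:                # "Check if a log retention policy exists"
        requests.put(url, json=POLICY).raise_for_status()   # "Create new log retention policy"

Attaching the policy to indices that already exist is a separate API call (POST _plugins/_ism/add/<index-pattern> with the policy id), which is what the final "Apply retention policy to existing indices" task corresponds to.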
INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:19.926895 | orchestrator | 2025-08-29 15:01:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:22.975701 | orchestrator | 2025-08-29 15:01:22 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:22.977398 | orchestrator | 2025-08-29 15:01:22 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:22.977453 | orchestrator | 2025-08-29 15:01:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:26.030484 | orchestrator | 2025-08-29 15:01:26 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state STARTED 2025-08-29 15:01:26.030624 | orchestrator | 2025-08-29 15:01:26 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:26.030653 | orchestrator | 2025-08-29 15:01:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:29.077345 | orchestrator | 2025-08-29 15:01:29.077457 | orchestrator | 2025-08-29 15:01:29.077467 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-08-29 15:01:29.077475 | orchestrator | 2025-08-29 15:01:29.077482 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 15:01:29.077487 | orchestrator | Friday 29 August 2025 14:58:02 +0000 (0:00:00.089) 0:00:00.089 ********* 2025-08-29 15:01:29.077491 | orchestrator | ok: [localhost] => { 2025-08-29 15:01:29.077497 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-08-29 15:01:29.077502 | orchestrator | } 2025-08-29 15:01:29.077506 | orchestrator | 2025-08-29 15:01:29.077510 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-08-29 15:01:29.077514 | orchestrator | Friday 29 August 2025 14:58:02 +0000 (0:00:00.060) 0:00:00.149 ********* 2025-08-29 15:01:29.077519 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-08-29 15:01:29.077526 | orchestrator | ...ignoring 2025-08-29 15:01:29.077533 | orchestrator | 2025-08-29 15:01:29.077538 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-08-29 15:01:29.077544 | orchestrator | Friday 29 August 2025 14:58:05 +0000 (0:00:02.863) 0:00:03.013 ********* 2025-08-29 15:01:29.077550 | orchestrator | skipping: [localhost] 2025-08-29 15:01:29.077556 | orchestrator | 2025-08-29 15:01:29.077562 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-08-29 15:01:29.077569 | orchestrator | Friday 29 August 2025 14:58:05 +0000 (0:00:00.055) 0:00:03.068 ********* 2025-08-29 15:01:29.077576 | orchestrator | ok: [localhost] 2025-08-29 15:01:29.077582 | orchestrator | 2025-08-29 15:01:29.077588 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:01:29.077595 | orchestrator | 2025-08-29 15:01:29.077601 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:01:29.077608 | orchestrator | Friday 29 August 2025 14:58:06 +0000 (0:00:00.148) 0:00:03.217 ********* 2025-08-29 15:01:29.077616 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.077648 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.077655 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.077658 | orchestrator | 2025-08-29 15:01:29.077662 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:01:29.077666 | orchestrator | Friday 29 August 2025 14:58:06 +0000 (0:00:00.290) 0:00:03.508 ********* 2025-08-29 15:01:29.077670 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 15:01:29.077675 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 15:01:29.077679 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 15:01:29.077683 | orchestrator | 2025-08-29 15:01:29.077687 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 15:01:29.077691 | orchestrator | 2025-08-29 15:01:29.077896 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 15:01:29.077908 | orchestrator | Friday 29 August 2025 14:58:07 +0000 (0:00:00.751) 0:00:04.260 ********* 2025-08-29 15:01:29.077912 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:01:29.077916 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:01:29.077920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:01:29.077924 | orchestrator | 2025-08-29 15:01:29.077928 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:01:29.077932 | orchestrator | Friday 29 August 2025 14:58:07 +0000 (0:00:00.470) 0:00:04.730 ********* 2025-08-29 15:01:29.077936 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:29.077941 | orchestrator | 2025-08-29 15:01:29.077945 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-08-29 15:01:29.077949 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:00.544) 0:00:05.274 ********* 2025-08-29 
15:01:29.077982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.077990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078007 | orchestrator | 2025-08-29 15:01:29.078049 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 15:01:29.078054 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:03.974) 0:00:09.249 ********* 2025-08-29 15:01:29.078058 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.078062 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078066 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078070 | orchestrator | 2025-08-29 15:01:29.078073 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 15:01:29.078077 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:00.784) 0:00:10.033 ********* 2025-08-29 15:01:29.078085 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078089 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078093 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.078097 | orchestrator | 2025-08-29 15:01:29.078102 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 15:01:29.078108 | orchestrator | Friday 29 August 2025 14:58:14 +0000 (0:00:01.728) 0:00:11.762 ********* 2025-08-29 15:01:29.078115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 
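Note: in the mariadb items above, haproxy.mariadb.custom_member_list pins the first Galera node as the only active backend and marks the other two as "backup", so all writes go to a single node while the remaining cluster members act as hot standbys. The lines are rendered by kolla-ansible templates that are not part of this log; the Python sketch below merely reproduces the pattern of the rendered entries using the addresses shown above.

    # Build haproxy "server" lines in the same shape as the custom_member_list entries above.
    hosts = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    def member_lines(hosts: dict[str, str], port: int = 3306) -> list[str]:
        lines = []
        for index, (name, addr) in enumerate(hosts.items()):
            backup = "" if index == 0 else " backup"   # first member is active, the rest are standbys
            lines.append(f" server {name} {addr}:{port} check port {port} inter 2000 rise 2 fall 5{backup}")
        return lines

    for line in member_lines(hosts):
        print(line)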
15:01:29.078224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078229 | orchestrator | 2025-08-29 15:01:29.078233 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-08-29 15:01:29.078236 | orchestrator | Friday 29 August 2025 14:58:19 +0000 (0:00:05.213) 0:00:16.975 ********* 2025-08-29 15:01:29.078240 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078244 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078248 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.078252 | orchestrator | 2025-08-29 15:01:29.078255 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-08-29 15:01:29.078259 | orchestrator | Friday 29 August 2025 14:58:21 +0000 (0:00:01.505) 0:00:18.480 ********* 2025-08-29 15:01:29.078263 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.078267 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:29.078270 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:29.078274 | orchestrator | 2025-08-29 15:01:29.078278 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:01:29.078292 | orchestrator | Friday 29 August 2025 14:58:26 +0000 (0:00:05.070) 0:00:23.551 ********* 2025-08-29 15:01:29.078333 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:29.078340 | orchestrator | 2025-08-29 15:01:29.078344 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 15:01:29.078348 | orchestrator | Friday 29 August 2025 14:58:27 +0000 (0:00:00.690) 
0:00:24.241 ********* 2025-08-29 15:01:29.078362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078380 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078386 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078414 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078420 | orchestrator | 2025-08-29 15:01:29.078426 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 15:01:29.078432 | orchestrator | Friday 29 August 2025 14:58:30 +0000 (0:00:03.137) 0:00:27.378 ********* 2025-08-29 15:01:29.078439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078447 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078468 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': 
'30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078476 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078480 | orchestrator | 2025-08-29 15:01:29.078484 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 15:01:29.078487 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:03.215) 0:00:30.594 ********* 2025-08-29 15:01:29.078497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078510 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:01:29.078514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078518 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:29.078544 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078548 | orchestrator | 2025-08-29 15:01:29.078552 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-08-29 15:01:29.078558 | orchestrator | Friday 29 August 2025 14:58:36 +0000 (0:00:02.690) 0:00:33.285 ********* 2025-08-29 15:01:29.078568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:01:29.078602 | orchestrator | 2025-08-29 15:01:29.078608 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-08-29 15:01:29.078614 | orchestrator | Friday 29 August 2025 14:58:39 +0000 (0:00:03.451) 0:00:36.737 ********* 2025-08-29 15:01:29.078621 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.078627 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:29.078631 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:29.078635 | orchestrator | 2025-08-29 15:01:29.078639 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-08-29 15:01:29.078642 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:01.053) 0:00:37.790 ********* 2025-08-29 15:01:29.078646 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.078650 | 
orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.078654 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.078658 | orchestrator | 2025-08-29 15:01:29.078662 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-08-29 15:01:29.078666 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:00.381) 0:00:38.172 ********* 2025-08-29 15:01:29.078670 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.078673 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.078677 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.078681 | orchestrator | 2025-08-29 15:01:29.078685 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-08-29 15:01:29.078693 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:00.425) 0:00:38.597 ********* 2025-08-29 15:01:29.078698 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-08-29 15:01:29.078703 | orchestrator | ...ignoring 2025-08-29 15:01:29.078707 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-08-29 15:01:29.078711 | orchestrator | ...ignoring 2025-08-29 15:01:29.078717 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-08-29 15:01:29.078724 | orchestrator | ...ignoring 2025-08-29 15:01:29.078730 | orchestrator | 2025-08-29 15:01:29.078736 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-08-29 15:01:29.078742 | orchestrator | Friday 29 August 2025 14:58:52 +0000 (0:00:10.956) 0:00:49.553 ********* 2025-08-29 15:01:29.078748 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.078754 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.078776 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.078780 | orchestrator | 2025-08-29 15:01:29.078783 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-08-29 15:01:29.078787 | orchestrator | Friday 29 August 2025 14:58:53 +0000 (0:00:01.061) 0:00:50.615 ********* 2025-08-29 15:01:29.078791 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078798 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078802 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078807 | orchestrator | 2025-08-29 15:01:29.078813 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-08-29 15:01:29.078819 | orchestrator | Friday 29 August 2025 14:58:53 +0000 (0:00:00.489) 0:00:51.104 ********* 2025-08-29 15:01:29.078825 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078832 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078838 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078844 | orchestrator | 2025-08-29 15:01:29.078851 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-08-29 15:01:29.078856 | orchestrator | Friday 29 August 2025 14:58:54 +0000 (0:00:00.646) 0:00:51.751 ********* 2025-08-29 15:01:29.078859 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078863 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078867 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078871 | orchestrator | 2025-08-29 15:01:29.078875 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-08-29 15:01:29.078881 | orchestrator | Friday 29 August 2025 14:58:55 +0000 (0:00:00.487) 0:00:52.239 ********* 2025-08-29 15:01:29.078885 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.078889 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.078893 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.078897 | orchestrator | 2025-08-29 15:01:29.078900 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-08-29 15:01:29.078904 | orchestrator | Friday 29 August 2025 14:58:55 +0000 (0:00:00.704) 0:00:52.943 ********* 2025-08-29 15:01:29.078908 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078912 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078916 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078920 | orchestrator | 2025-08-29 15:01:29.078923 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:01:29.078927 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:00.471) 0:00:53.415 ********* 2025-08-29 15:01:29.078931 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078935 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.078939 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-08-29 15:01:29.078943 | orchestrator | 2025-08-29 15:01:29.078950 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-08-29 15:01:29.078953 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:00.461) 0:00:53.876 ********* 2025-08-29 15:01:29.078957 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.078961 | orchestrator | 2025-08-29 15:01:29.078965 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-08-29 15:01:29.078969 | orchestrator | Friday 29 August 2025 14:59:07 +0000 (0:00:10.708) 0:01:04.585 ********* 2025-08-29 15:01:29.078973 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.078976 | orchestrator | 2025-08-29 15:01:29.078980 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:01:29.078985 | orchestrator | Friday 29 August 2025 14:59:07 +0000 (0:00:00.134) 0:01:04.719 ********* 2025-08-29 15:01:29.078989 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.078994 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.078998 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079002 | orchestrator | 2025-08-29 15:01:29.079007 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-08-29 15:01:29.079011 | orchestrator | Friday 29 August 2025 14:59:08 +0000 (0:00:01.168) 0:01:05.888 ********* 2025-08-29 15:01:29.079015 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079020 | orchestrator | 2025-08-29 15:01:29.079024 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-08-29 15:01:29.079028 | orchestrator | Friday 29 August 2025 14:59:17 +0000 (0:00:08.738) 0:01:14.626 ********* 2025-08-29 15:01:29.079033 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.079051 | orchestrator | 2025-08-29 
15:01:29.079055 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-08-29 15:01:29.079060 | orchestrator | Friday 29 August 2025 14:59:19 +0000 (0:00:01.648) 0:01:16.275 ********* 2025-08-29 15:01:29.079064 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.079068 | orchestrator | 2025-08-29 15:01:29.079073 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-08-29 15:01:29.079077 | orchestrator | Friday 29 August 2025 14:59:22 +0000 (0:00:02.936) 0:01:19.212 ********* 2025-08-29 15:01:29.079081 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079086 | orchestrator | 2025-08-29 15:01:29.079090 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-08-29 15:01:29.079094 | orchestrator | Friday 29 August 2025 14:59:22 +0000 (0:00:00.139) 0:01:19.352 ********* 2025-08-29 15:01:29.079099 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.079103 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.079108 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079112 | orchestrator | 2025-08-29 15:01:29.079116 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-08-29 15:01:29.079121 | orchestrator | Friday 29 August 2025 14:59:22 +0000 (0:00:00.605) 0:01:19.957 ********* 2025-08-29 15:01:29.079125 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.079130 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 15:01:29.079134 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:29.079138 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:29.079143 | orchestrator | 2025-08-29 15:01:29.079147 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 15:01:29.079151 | orchestrator | skipping: no hosts matched 2025-08-29 15:01:29.079156 | orchestrator | 2025-08-29 15:01:29.079160 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 15:01:29.079164 | orchestrator | 2025-08-29 15:01:29.079169 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 15:01:29.079173 | orchestrator | Friday 29 August 2025 14:59:23 +0000 (0:00:00.361) 0:01:20.318 ********* 2025-08-29 15:01:29.079177 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:29.079182 | orchestrator | 2025-08-29 15:01:29.079186 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 15:01:29.079197 | orchestrator | Friday 29 August 2025 14:59:43 +0000 (0:00:20.013) 0:01:40.332 ********* 2025-08-29 15:01:29.079201 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.079206 | orchestrator | 2025-08-29 15:01:29.079210 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 15:01:29.079215 | orchestrator | Friday 29 August 2025 15:00:03 +0000 (0:00:20.742) 0:02:01.075 ********* 2025-08-29 15:01:29.079219 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.079223 | orchestrator | 2025-08-29 15:01:29.079228 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 15:01:29.079233 | orchestrator | 2025-08-29 15:01:29.079237 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-08-29 15:01:29.079241 | orchestrator | Friday 29 August 2025 15:00:06 +0000 (0:00:03.075) 0:02:04.150 ********* 2025-08-29 15:01:29.079246 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:29.079250 | orchestrator | 2025-08-29 15:01:29.079255 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 15:01:29.079261 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:21.359) 0:02:25.510 ********* 2025-08-29 15:01:29.079266 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.079270 | orchestrator | 2025-08-29 15:01:29.079274 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 15:01:29.079279 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:20.629) 0:02:46.139 ********* 2025-08-29 15:01:29.079284 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.079288 | orchestrator | 2025-08-29 15:01:29.079292 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 15:01:29.079297 | orchestrator | 2025-08-29 15:01:29.079301 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 15:01:29.079305 | orchestrator | Friday 29 August 2025 15:00:52 +0000 (0:00:03.288) 0:02:49.427 ********* 2025-08-29 15:01:29.079310 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079314 | orchestrator | 2025-08-29 15:01:29.079318 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 15:01:29.079323 | orchestrator | Friday 29 August 2025 15:01:06 +0000 (0:00:14.109) 0:03:03.537 ********* 2025-08-29 15:01:29.079327 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.079332 | orchestrator | 2025-08-29 15:01:29.079336 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 15:01:29.079340 | orchestrator | Friday 29 August 2025 15:01:11 +0000 (0:00:05.604) 0:03:09.141 ********* 2025-08-29 15:01:29.079345 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.079349 | orchestrator | 2025-08-29 15:01:29.079354 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-08-29 15:01:29.079358 | orchestrator | 2025-08-29 15:01:29.079361 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 15:01:29.079365 | orchestrator | Friday 29 August 2025 15:01:14 +0000 (0:00:02.617) 0:03:11.759 ********* 2025-08-29 15:01:29.079369 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:29.079373 | orchestrator | 2025-08-29 15:01:29.079377 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-08-29 15:01:29.079380 | orchestrator | Friday 29 August 2025 15:01:15 +0000 (0:00:00.680) 0:03:12.440 ********* 2025-08-29 15:01:29.079384 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.079388 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079392 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079396 | orchestrator | 2025-08-29 15:01:29.079400 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-08-29 15:01:29.079404 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:02.384) 0:03:14.824 ********* 2025-08-29 15:01:29.079411 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 15:01:29.079417 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079426 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079434 | orchestrator | 2025-08-29 15:01:29.079447 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-08-29 15:01:29.079453 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:02.138) 0:03:16.962 ********* 2025-08-29 15:01:29.079459 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.079466 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079472 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079478 | orchestrator | 2025-08-29 15:01:29.079485 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-08-29 15:01:29.079494 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:02.320) 0:03:19.283 ********* 2025-08-29 15:01:29.079504 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.079513 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079523 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:29.079533 | orchestrator | 2025-08-29 15:01:29.079542 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-08-29 15:01:29.079554 | orchestrator | Friday 29 August 2025 15:01:24 +0000 (0:00:02.066) 0:03:21.350 ********* 2025-08-29 15:01:29.079561 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:29.079567 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:29.079574 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:29.079580 | orchestrator | 2025-08-29 15:01:29.079586 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 15:01:29.079593 | orchestrator | Friday 29 August 2025 15:01:27 +0000 (0:00:03.848) 0:03:25.199 ********* 2025-08-29 15:01:29.079599 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:29.079605 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:29.079612 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:29.079618 | orchestrator | 2025-08-29 15:01:29.079624 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:01:29.079630 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 15:01:29.079637 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-08-29 15:01:29.079650 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-08-29 15:01:29.079657 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-08-29 15:01:29.079663 | orchestrator | 2025-08-29 15:01:29.079669 | orchestrator | 2025-08-29 15:01:29.079676 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:01:29.079683 | orchestrator | Friday 29 August 2025 15:01:28 +0000 (0:00:00.261) 0:03:25.460 ********* 2025-08-29 15:01:29.079689 | orchestrator | =============================================================================== 2025-08-29 15:01:29.079696 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.37s 2025-08-29 15:01:29.079703 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.37s 
2025-08-29 15:01:29.079715 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.11s 2025-08-29 15:01:29.079722 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2025-08-29 15:01:29.079728 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.71s 2025-08-29 15:01:29.079734 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.74s 2025-08-29 15:01:29.079740 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 6.36s 2025-08-29 15:01:29.079747 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.60s 2025-08-29 15:01:29.079753 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.21s 2025-08-29 15:01:29.079787 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.07s 2025-08-29 15:01:29.079799 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.97s 2025-08-29 15:01:29.079805 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.85s 2025-08-29 15:01:29.079812 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.45s 2025-08-29 15:01:29.079818 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.22s 2025-08-29 15:01:29.079824 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.14s 2025-08-29 15:01:29.079830 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.94s 2025-08-29 15:01:29.079836 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s 2025-08-29 15:01:29.079842 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.69s 2025-08-29 15:01:29.079847 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.62s 2025-08-29 15:01:29.079853 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.38s 2025-08-29 15:01:29.079859 | orchestrator | 2025-08-29 15:01:29 | INFO  | Task a5807d78-e74d-404c-aee3-5ce497b575d8 is in state SUCCESS 2025-08-29 15:01:29.079865 | orchestrator | 2025-08-29 15:01:29 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:29.079871 | orchestrator | 2025-08-29 15:01:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:32.132095 | orchestrator | 2025-08-29 15:01:32 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:01:32.135689 | orchestrator | 2025-08-29 15:01:32 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:32.138275 | orchestrator | 2025-08-29 15:01:32 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:01:32.139040 | orchestrator | 2025-08-29 15:01:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:35.192958 | orchestrator | 2025-08-29 15:01:35 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:01:35.193687 | orchestrator | 2025-08-29 15:01:35 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state STARTED 2025-08-29 15:01:35.195173 | orchestrator | 2025-08-29 15:01:35 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 
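The "Check MariaDB service port liveness" step above reports "Timeout when waiting for search string MariaDB in 192.168.16.10:3306" on all three nodes before the cluster is bootstrapped: the check simply connects to port 3306 and waits for the server greeting to contain the string "MariaDB". A minimal, illustrative Python sketch of that kind of probe (not part of the job itself; the hosts and the 10 second timeout are the testbed values seen in this log) could look like this:

    #!/usr/bin/env python3
    # Illustrative sketch only: approximate the liveness probe used above by
    # waiting until the greeting on <host>:3306 contains a search string.
    import socket
    import time

    def wait_for_mariadb(host: str, port: int = 3306, search: bytes = b"MariaDB",
                         timeout: float = 10.0) -> bool:
        """Return True once the server greeting on host:port contains `search`."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2) as sock:
                    # The MariaDB handshake packet carries the server version
                    # string (e.g. "10.11.13-MariaDB"), so a plain read suffices.
                    if search in sock.recv(1024):
                        return True
            except OSError:
                pass  # not listening yet; retry until the deadline
            time.sleep(1)
        return False

    if __name__ == "__main__":
        for node in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
            print(node, "up" if wait_for_mariadb(node) else "timeout")

In the run above a probe like this would fail before bootstrap (hence the ignored failures) and only succeed after the first MariaDB container has been started.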
2025-08-29 15:01:35.195642 | orchestrator | 2025-08-29 15:01:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:39.239195 | orchestrator | 2025-08-29 15:02:39 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state
STARTED 2025-08-29 15:02:39.241397 | orchestrator | 2025-08-29 15:02:39 | INFO  | Task 64639414-551b-4fe7-8af6-cdc100526622 is in state SUCCESS 2025-08-29 15:02:39.243549 | orchestrator | 2025-08-29 15:02:39.243673 | orchestrator | 2025-08-29 15:02:39.243853 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-08-29 15:02:39.243872 | orchestrator | 2025-08-29 15:02:39.244375 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 15:02:39.244395 | orchestrator | Friday 29 August 2025 15:00:21 +0000 (0:00:00.634) 0:00:00.634 ********* 2025-08-29 15:02:39.244407 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:02:39.244418 | orchestrator | 2025-08-29 15:02:39.244429 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 15:02:39.244440 | orchestrator | Friday 29 August 2025 15:00:21 +0000 (0:00:00.656) 0:00:01.291 ********* 2025-08-29 15:02:39.244452 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.244463 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244474 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.244485 | orchestrator | 2025-08-29 15:02:39.244496 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 15:02:39.244507 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.682) 0:00:01.973 ********* 2025-08-29 15:02:39.244517 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.244529 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244539 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.244550 | orchestrator | 2025-08-29 15:02:39.244561 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 15:02:39.244572 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.297) 0:00:02.271 ********* 2025-08-29 15:02:39.244584 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.244595 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244606 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.244616 | orchestrator | 2025-08-29 15:02:39.244654 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 15:02:39.244666 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.855) 0:00:03.127 ********* 2025-08-29 15:02:39.244677 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.244716 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244727 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.244738 | orchestrator | 2025-08-29 15:02:39.244749 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 15:02:39.244760 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.323) 0:00:03.450 ********* 2025-08-29 15:02:39.244770 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.244781 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244791 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.244802 | orchestrator | 2025-08-29 15:02:39.244813 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 15:02:39.244823 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.348) 0:00:03.799 ********* 2025-08-29 15:02:39.244834 | orchestrator | ok: [testbed-node-3] 
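Note: the repeated "is in state STARTED ... Wait 1 second(s)" lines above are the deploy wrapper polling its background tasks until each one leaves STARTED (task 64639414 reaches SUCCESS here, which is when the buffered "Create ceph pools" play output is flushed). A minimal sketch of such a wait loop, assuming a hypothetical get_state helper rather than the actual osism client API:

    # Hedged sketch, not the real implementation: poll each task ID, wait one
    # second between rounds, return once no task is still in STARTED.
    import time
    from typing import Callable

    def wait_for_tasks(task_ids: list[str],
                       get_state: Callable[[str], str],
                       interval: float = 1.0) -> dict[str, str]:
        while True:
            states = {tid: get_state(tid) for tid in task_ids}
            for tid, state in states.items():
                print(f"Task {tid} is in state {state}")
            if all(state != "STARTED" for state in states.values()):
                return states
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)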
2025-08-29 15:02:39.244845 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244856 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.244866 | orchestrator | 2025-08-29 15:02:39.244877 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 15:02:39.244888 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.369) 0:00:04.169 ********* 2025-08-29 15:02:39.244899 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.244912 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.244922 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.244933 | orchestrator | 2025-08-29 15:02:39.244944 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 15:02:39.244954 | orchestrator | Friday 29 August 2025 15:00:25 +0000 (0:00:00.522) 0:00:04.691 ********* 2025-08-29 15:02:39.244965 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.244976 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.244990 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.245001 | orchestrator | 2025-08-29 15:02:39.245015 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 15:02:39.245027 | orchestrator | Friday 29 August 2025 15:00:25 +0000 (0:00:00.323) 0:00:05.015 ********* 2025-08-29 15:02:39.245040 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:02:39.245057 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:02:39.245075 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:02:39.245094 | orchestrator | 2025-08-29 15:02:39.245122 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 15:02:39.245144 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:00.725) 0:00:05.740 ********* 2025-08-29 15:02:39.245163 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.245183 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.245204 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.245224 | orchestrator | 2025-08-29 15:02:39.245245 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 15:02:39.245265 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:00.451) 0:00:06.191 ********* 2025-08-29 15:02:39.245278 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:02:39.245289 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:02:39.245300 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:02:39.245311 | orchestrator | 2025-08-29 15:02:39.245337 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 15:02:39.245349 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:02.374) 0:00:08.566 ********* 2025-08-29 15:02:39.245360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:02:39.245385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:02:39.245397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:02:39.245407 | orchestrator | skipping: [testbed-node-3] 
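Note: the "Find a running mon container" task above loops over the mon hosts and, as the command output further below shows, runs "docker ps -q --filter name=ceph-mon-<host>" on each of them. A minimal sketch of that discovery step, using the host names from this testbed log (this is an illustration of the check, not the ceph-ansible code itself):

    # Hedged sketch: return the first mon host whose ceph-mon container is up,
    # mirroring the docker ps filter visible in the task output below.
    import subprocess
    from typing import Optional

    def find_running_mon(mon_hosts: list[str],
                         container_binary: str = "docker") -> Optional[str]:
        for host in mon_hosts:
            out = subprocess.run(
                [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
                capture_output=True, text=True, check=False,
            ).stdout.strip()
            if out:  # a non-empty container ID means this mon is running
                return host
        return None

    if __name__ == "__main__":
        print(find_running_mon(["testbed-node-0", "testbed-node-1", "testbed-node-2"]))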
2025-08-29 15:02:39.245418 | orchestrator | 2025-08-29 15:02:39.245429 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 15:02:39.245494 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.499) 0:00:09.065 ********* 2025-08-29 15:02:39.245509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.245524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.245535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.245546 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.245557 | orchestrator | 2025-08-29 15:02:39.245568 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 15:02:39.245579 | orchestrator | Friday 29 August 2025 15:00:30 +0000 (0:00:01.027) 0:00:10.092 ********* 2025-08-29 15:02:39.245592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.245606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.245618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.245629 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.245640 | orchestrator | 2025-08-29 15:02:39.245651 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 15:02:39.245662 | orchestrator | Friday 29 August 2025 15:00:30 +0000 (0:00:00.160) 0:00:10.253 ********* 2025-08-29 15:02:39.245676 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bdd79b4e6c8a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 15:00:27.302974', 'end': '2025-08-29 15:00:27.343374', 'delta': '0:00:00.040400', 'msg': '', 'invocation': 
{'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bdd79b4e6c8a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-08-29 15:02:39.245742 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ff941c87f5d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 15:00:28.100157', 'end': '2025-08-29 15:00:28.155180', 'delta': '0:00:00.055023', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ff941c87f5d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-08-29 15:02:39.245789 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6d4c69d946df', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 15:00:28.747437', 'end': '2025-08-29 15:00:28.793502', 'delta': '0:00:00.046065', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6d4c69d946df'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-08-29 15:02:39.245828 | orchestrator | 2025-08-29 15:02:39.245839 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 15:02:39.245850 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.430) 0:00:10.684 ********* 2025-08-29 15:02:39.245861 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.245874 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.245894 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.245915 | orchestrator | 2025-08-29 15:02:39.245942 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 15:02:39.245962 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.466) 0:00:11.151 ********* 2025-08-29 15:02:39.245981 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-08-29 15:02:39.246001 | orchestrator | 2025-08-29 15:02:39.246113 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 15:02:39.246141 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:01.917) 0:00:13.068 ********* 2025-08-29 15:02:39.246161 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246180 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.246200 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.246220 | orchestrator | 2025-08-29 15:02:39.246241 | orchestrator | TASK [ceph-facts : Get current fsid] 
******************************************* 2025-08-29 15:02:39.246255 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:00.340) 0:00:13.408 ********* 2025-08-29 15:02:39.246266 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246276 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.246287 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.246298 | orchestrator | 2025-08-29 15:02:39.246308 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:02:39.246319 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:00.451) 0:00:13.860 ********* 2025-08-29 15:02:39.246330 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246341 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.246351 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.246366 | orchestrator | 2025-08-29 15:02:39.246384 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 15:02:39.246400 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:00.524) 0:00:14.384 ********* 2025-08-29 15:02:39.246415 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.246431 | orchestrator | 2025-08-29 15:02:39.246630 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 15:02:39.246649 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:00.131) 0:00:14.516 ********* 2025-08-29 15:02:39.246659 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246670 | orchestrator | 2025-08-29 15:02:39.246700 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:02:39.246713 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.275) 0:00:14.791 ********* 2025-08-29 15:02:39.246723 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246734 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.246745 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.246755 | orchestrator | 2025-08-29 15:02:39.246767 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 15:02:39.246777 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.365) 0:00:15.157 ********* 2025-08-29 15:02:39.246788 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246819 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.246830 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.246841 | orchestrator | 2025-08-29 15:02:39.246852 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 15:02:39.246862 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.416) 0:00:15.573 ********* 2025-08-29 15:02:39.246881 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.246911 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.246930 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.246950 | orchestrator | 2025-08-29 15:02:39.246969 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 15:02:39.246987 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:00.567) 0:00:16.141 ********* 2025-08-29 15:02:39.247006 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.247022 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.247033 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 15:02:39.247044 | orchestrator | 2025-08-29 15:02:39.247055 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 15:02:39.247065 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:00.351) 0:00:16.492 ********* 2025-08-29 15:02:39.247076 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.247096 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.247107 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.247118 | orchestrator | 2025-08-29 15:02:39.247129 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 15:02:39.247139 | orchestrator | Friday 29 August 2025 15:00:37 +0000 (0:00:00.369) 0:00:16.862 ********* 2025-08-29 15:02:39.247150 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.247160 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.247171 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.247182 | orchestrator | 2025-08-29 15:02:39.247193 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 15:02:39.247255 | orchestrator | Friday 29 August 2025 15:00:37 +0000 (0:00:00.390) 0:00:17.252 ********* 2025-08-29 15:02:39.247269 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.247282 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.247294 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.247306 | orchestrator | 2025-08-29 15:02:39.247319 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 15:02:39.247331 | orchestrator | Friday 29 August 2025 15:00:38 +0000 (0:00:00.549) 0:00:17.802 ********* 2025-08-29 15:02:39.247345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9', 'dm-uuid-LVM-PSPH14GH09J0kA1RbVItmiOZIquYOG3k2u45bJBRBaFh2iYJjIb15CODRaJofD86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0', 'dm-uuid-LVM-O97dyzANmDt8UDhQLEsHrELT6wzi4qzyFe25sty9BiB38XEHjGmceShZKbZzbbST'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 15:02:39.247401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.247565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LMClOH-HTcX-urtS-wxv0-f3dv-LUYl-ccXnxv', 'scsi-0QEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d', 'scsi-SQEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.247611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XZyA7j-qZag-aiwL-kgFz-9mi5-BfHq-dkJ9GF', 'scsi-0QEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e', 'scsi-SQEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.247633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba', 'scsi-SQEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.247645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12', 'dm-uuid-LVM-PcnBM91jI969xeG2G7spnVSuPPQuboI2IdFfTcCUsQNwKETonsuK6rNQiC1GDbRm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.247668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63', 'dm-uuid-LVM-nQqFVEZid8ujOgcFssAfSQAYM1cLhlhU1nnw3Phm825bc5saJyUoGvtqQH3idFV2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-08-29 15:02:39.247719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247919 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.247937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.247980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LAiBWk-w2Ea-22E3-i5Oe-rKKc-qMf9-rDzPwx', 'scsi-0QEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8', 'scsi-SQEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uJTqqQ-Lt2i-w1fm-dhFc-ptEh-BsCY-RCX3FH', 'scsi-0QEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048', 'scsi-SQEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008', 'scsi-SQEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248090 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.248109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c', 'dm-uuid-LVM-dQwh4LB0g1qzbRKP9aVHn3E0vVB9cJBFvaYV1oXfrl50GIhqBubQZQbYq24RSU4B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec', 'dm-uuid-LVM-JXf4c6esfPqDz0wrFTQC8LaNYTcXKZDr2ceiUQy0TONpn0mSquCdR1hAyIo2oDVd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:39.248352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oNGlDP-zHil-MfZ0-dBFL-53J0-JNRG-FM0WkY', 'scsi-0QEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166', 'scsi-SQEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EKS2Kg-ziXY-QzeE-q2JM-mBsR-U4k8-6dgOfS', 'scsi-0QEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf', 'scsi-SQEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a', 'scsi-SQEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:39.248490 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.248501 | orchestrator | 2025-08-29 15:02:39.248513 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 15:02:39.248524 | orchestrator | Friday 29 August 2025 15:00:38 +0000 (0:00:00.717) 0:00:18.519 ********* 2025-08-29 15:02:39.248536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9', 'dm-uuid-LVM-PSPH14GH09J0kA1RbVItmiOZIquYOG3k2u45bJBRBaFh2iYJjIb15CODRaJofD86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0', 'dm-uuid-LVM-O97dyzANmDt8UDhQLEsHrELT6wzi4qzyFe25sty9BiB38XEHjGmceShZKbZzbbST'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248640 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12', 'dm-uuid-LVM-PcnBM91jI969xeG2G7spnVSuPPQuboI2IdFfTcCUsQNwKETonsuK6rNQiC1GDbRm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b128c3ba-6808-4295-9988-e02b5b112f5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--95143370--f7d7--5ec5--ad3d--8af7ad027df9-osd--block--95143370--f7d7--5ec5--ad3d--8af7ad027df9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LMClOH-HTcX-urtS-wxv0-f3dv-LUYl-ccXnxv', 'scsi-0QEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d', 'scsi-SQEMU_QEMU_HARDDISK_828a38b3-3187-4328-b10e-4e827af3391d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63', 'dm-uuid-LVM-nQqFVEZid8ujOgcFssAfSQAYM1cLhlhU1nnw3Phm825bc5saJyUoGvtqQH3idFV2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a5a082ef--4dec--5d63--a984--4d3e57643ca0-osd--block--a5a082ef--4dec--5d63--a984--4d3e57643ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XZyA7j-qZag-aiwL-kgFz-9mi5-BfHq-dkJ9GF', 'scsi-0QEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e', 'scsi-SQEMU_QEMU_HARDDISK_f5a1fb11-e928-478a-853f-ace275f9637e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba', 'scsi-SQEMU_QEMU_HARDDISK_5d364b5e-b4d2-47dd-94e6-90a734de67ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248864 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.248881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.248905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249025 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249117 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c', 'dm-uuid-LVM-dQwh4LB0g1qzbRKP9aVHn3E0vVB9cJBFvaYV1oXfrl50GIhqBubQZQbYq24RSU4B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7906b42-e75b-4add-b229-0ba1b6d7cbfd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec', 'dm-uuid-LVM-JXf4c6esfPqDz0wrFTQC8LaNYTcXKZDr2ceiUQy0TONpn0mSquCdR1hAyIo2oDVd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2496fa80--0e44--5b7b--b63b--c9ee5061ab12-osd--block--2496fa80--0e44--5b7b--b63b--c9ee5061ab12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LAiBWk-w2Ea-22E3-i5Oe-rKKc-qMf9-rDzPwx', 'scsi-0QEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8', 'scsi-SQEMU_QEMU_HARDDISK_b4efbef4-6c99-40ea-a6ec-b5ce29198be8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63-osd--block--b3a0840c--f726--58e7--9fb9--c9f22cb6ab63'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uJTqqQ-Lt2i-w1fm-dhFc-ptEh-BsCY-RCX3FH', 'scsi-0QEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048', 'scsi-SQEMU_QEMU_HARDDISK_aab0ecd9-fbf7-4fe6-a323-3b8ec1fc1048'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249299 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008', 'scsi-SQEMU_QEMU_HARDDISK_112d60a2-5c53-4704-85f0-10fd2a98c008'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249358 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249370 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.249381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce825d14-8cf3-46c0-a5a2-f8443242ecf0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bf1413fe--a30b--500c--b995--d4125007de3c-osd--block--bf1413fe--a30b--500c--b995--d4125007de3c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oNGlDP-zHil-MfZ0-dBFL-53J0-JNRG-FM0WkY', 'scsi-0QEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166', 'scsi-SQEMU_QEMU_HARDDISK_ad4c9021-51cf-4f71-bbdb-17cb41c45166'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e997a020--3476--50fd--bfa0--07ccf1b1c8ec-osd--block--e997a020--3476--50fd--bfa0--07ccf1b1c8ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EKS2Kg-ziXY-QzeE-q2JM-mBsR-U4k8-6dgOfS', 'scsi-0QEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf', 'scsi-SQEMU_QEMU_HARDDISK_e1c9c203-f31e-4d63-b484-525a06e6ccdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a', 'scsi-SQEMU_QEMU_HARDDISK_d852edcf-4b4a-4ec3-af84-7b9722ba068a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249519 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:39.249539 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.249565 | orchestrator | 2025-08-29 15:02:39.249586 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 15:02:39.249602 | orchestrator | Friday 29 August 2025 15:00:39 +0000 (0:00:00.667) 0:00:19.187 ********* 2025-08-29 15:02:39.249618 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.249635 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.249650 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.249666 | orchestrator | 2025-08-29 15:02:39.249723 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 15:02:39.249741 | orchestrator | Friday 29 August 2025 15:00:40 +0000 (0:00:00.760) 0:00:19.948 ********* 2025-08-29 15:02:39.249758 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.249775 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.249790 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.249808 | orchestrator | 2025-08-29 15:02:39.249825 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:02:39.249843 | orchestrator | Friday 29 August 2025 15:00:40 +0000 (0:00:00.539) 0:00:20.487 ********* 2025-08-29 15:02:39.249861 | 
orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.249878 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.249893 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.249909 | orchestrator | 2025-08-29 15:02:39.249925 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:02:39.249941 | orchestrator | Friday 29 August 2025 15:00:41 +0000 (0:00:00.670) 0:00:21.158 ********* 2025-08-29 15:02:39.250266 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.250295 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.250333 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.250344 | orchestrator | 2025-08-29 15:02:39.250355 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:02:39.250366 | orchestrator | Friday 29 August 2025 15:00:41 +0000 (0:00:00.321) 0:00:21.479 ********* 2025-08-29 15:02:39.250377 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.250388 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.250399 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.250409 | orchestrator | 2025-08-29 15:02:39.250420 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:02:39.250431 | orchestrator | Friday 29 August 2025 15:00:42 +0000 (0:00:00.539) 0:00:22.019 ********* 2025-08-29 15:02:39.250441 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.250452 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.250462 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.250473 | orchestrator | 2025-08-29 15:02:39.250484 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 15:02:39.250495 | orchestrator | Friday 29 August 2025 15:00:43 +0000 (0:00:00.667) 0:00:22.686 ********* 2025-08-29 15:02:39.250505 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 15:02:39.250517 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 15:02:39.250528 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 15:02:39.250539 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 15:02:39.250549 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 15:02:39.250560 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 15:02:39.250571 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 15:02:39.250581 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 15:02:39.250592 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 15:02:39.250603 | orchestrator | 2025-08-29 15:02:39.250614 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 15:02:39.250624 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:00.978) 0:00:23.665 ********* 2025-08-29 15:02:39.250635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:02:39.250646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:02:39.250750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:02:39.250767 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.250778 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 15:02:39.250788 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 15:02:39.250799 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 15:02:39.250809 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.250820 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 15:02:39.250831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 15:02:39.250841 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 15:02:39.250852 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.250863 | orchestrator | 2025-08-29 15:02:39.250874 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 15:02:39.250884 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:00.429) 0:00:24.094 ********* 2025-08-29 15:02:39.250904 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:02:39.250915 | orchestrator | 2025-08-29 15:02:39.250926 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:02:39.250939 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:00.872) 0:00:24.967 ********* 2025-08-29 15:02:39.250949 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.250977 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.251002 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.251023 | orchestrator | 2025-08-29 15:02:39.251057 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:02:39.251074 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:00.346) 0:00:25.314 ********* 2025-08-29 15:02:39.251092 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.251108 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.251127 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.251145 | orchestrator | 2025-08-29 15:02:39.251162 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:02:39.251180 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.320) 0:00:25.635 ********* 2025-08-29 15:02:39.251200 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.251218 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.251236 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:39.251252 | orchestrator | 2025-08-29 15:02:39.251263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:02:39.251274 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.426) 0:00:26.061 ********* 2025-08-29 15:02:39.251285 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.251295 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.251308 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.251326 | orchestrator | 2025-08-29 15:02:39.251343 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:02:39.251362 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:00.851) 0:00:26.912 ********* 2025-08-29 15:02:39.251380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:02:39.251398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 
15:02:39.251417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:02:39.251436 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.251456 | orchestrator | 2025-08-29 15:02:39.251476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:02:39.251496 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:00.544) 0:00:27.457 ********* 2025-08-29 15:02:39.251508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:02:39.251520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:02:39.251532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:02:39.251544 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.251556 | orchestrator | 2025-08-29 15:02:39.251567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:02:39.251580 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:00.481) 0:00:27.939 ********* 2025-08-29 15:02:39.251592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:02:39.251603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:02:39.251616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:02:39.251629 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.251639 | orchestrator | 2025-08-29 15:02:39.251650 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:02:39.251661 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:00.415) 0:00:28.355 ********* 2025-08-29 15:02:39.251672 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:39.251714 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:39.251727 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:39.251738 | orchestrator | 2025-08-29 15:02:39.251748 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:02:39.251759 | orchestrator | Friday 29 August 2025 15:00:49 +0000 (0:00:00.429) 0:00:28.784 ********* 2025-08-29 15:02:39.251770 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:02:39.251780 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:02:39.251802 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:02:39.251812 | orchestrator | 2025-08-29 15:02:39.251823 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 15:02:39.251833 | orchestrator | Friday 29 August 2025 15:00:49 +0000 (0:00:00.642) 0:00:29.427 ********* 2025-08-29 15:02:39.251844 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:02:39.251855 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:02:39.251866 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:02:39.251876 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:02:39.251887 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:02:39.251898 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:02:39.251909 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-08-29 15:02:39.251921 | orchestrator | 2025-08-29 15:02:39.251939 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 15:02:39.251967 | orchestrator | Friday 29 August 2025 15:00:51 +0000 (0:00:01.290) 0:00:30.717 ********* 2025-08-29 15:02:39.251985 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:02:39.252002 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:02:39.252028 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:02:39.252045 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:02:39.252062 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:02:39.252079 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:02:39.252097 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:02:39.252112 | orchestrator | 2025-08-29 15:02:39.252141 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-08-29 15:02:39.252159 | orchestrator | Friday 29 August 2025 15:00:53 +0000 (0:00:02.355) 0:00:33.073 ********* 2025-08-29 15:02:39.252176 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:39.252196 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:39.252215 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-08-29 15:02:39.252232 | orchestrator | 2025-08-29 15:02:39.252249 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-08-29 15:02:39.252266 | orchestrator | Friday 29 August 2025 15:00:53 +0000 (0:00:00.489) 0:00:33.562 ********* 2025-08-29 15:02:39.252288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:02:39.252308 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:02:39.252326 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:02:39.252337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:02:39.252359 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:02:39.252370 | orchestrator | 2025-08-29 15:02:39.252381 | orchestrator | TASK [generate keys] *********************************************************** 2025-08-29 15:02:39.252392 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:46.000) 0:01:19.563 ********* 2025-08-29 15:02:39.252403 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252414 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252424 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252445 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252456 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252466 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-08-29 15:02:39.252477 | orchestrator | 2025-08-29 15:02:39.252488 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-08-29 15:02:39.252498 | orchestrator | Friday 29 August 2025 15:02:04 +0000 (0:00:24.894) 0:01:44.457 ********* 2025-08-29 15:02:39.252509 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252519 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252530 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252540 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252551 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252561 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252572 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:02:39.252582 | orchestrator | 2025-08-29 15:02:39.252593 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-08-29 15:02:39.252604 | orchestrator | Friday 29 August 2025 15:02:17 +0000 (0:00:12.528) 0:01:56.986 ********* 2025-08-29 15:02:39.252614 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252631 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:02:39.252642 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:02:39.252653 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252663 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:02:39.252674 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:02:39.252725 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252738 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:02:39.252749 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:02:39.252759 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252770 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:02:39.252781 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:02:39.252801 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252812 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:02:39.252822 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:02:39.252833 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:02:39.252843 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:02:39.252854 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:02:39.252865 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-08-29 15:02:39.252876 | orchestrator | 2025-08-29 15:02:39.252886 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:02:39.252898 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-08-29 15:02:39.252910 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 15:02:39.252922 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 15:02:39.252933 | orchestrator | 2025-08-29 15:02:39.252944 | orchestrator | 2025-08-29 15:02:39.252954 | orchestrator | 2025-08-29 15:02:39.252965 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:02:39.252976 | orchestrator | Friday 29 August 2025 15:02:36 +0000 (0:00:19.186) 0:02:16.172 ********* 2025-08-29 15:02:39.252986 | orchestrator | =============================================================================== 2025-08-29 15:02:39.252997 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.00s 2025-08-29 15:02:39.253008 | orchestrator | generate keys ---------------------------------------------------------- 24.89s 2025-08-29 15:02:39.253019 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.19s 2025-08-29 15:02:39.253029 | orchestrator | get keys from monitors ------------------------------------------------- 12.53s 2025-08-29 15:02:39.253040 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.37s 2025-08-29 15:02:39.253050 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.36s 2025-08-29 15:02:39.253061 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.92s 2025-08-29 15:02:39.253072 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.29s 2025-08-29 15:02:39.253083 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.03s 2025-08-29 15:02:39.253093 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.98s 2025-08-29 15:02:39.253104 | orchestrator | 
ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.87s 2025-08-29 15:02:39.253115 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2025-08-29 15:02:39.253125 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.85s 2025-08-29 15:02:39.253136 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.76s 2025-08-29 15:02:39.253146 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s 2025-08-29 15:02:39.253157 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.72s 2025-08-29 15:02:39.253168 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2025-08-29 15:02:39.253178 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2025-08-29 15:02:39.253189 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s 2025-08-29 15:02:39.253200 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.67s 2025-08-29 15:02:39.253211 | orchestrator | 2025-08-29 15:02:39 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:39.253235 | orchestrator | 2025-08-29 15:02:39 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:02:39.253247 | orchestrator | 2025-08-29 15:02:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:42.312357 | orchestrator | 2025-08-29 15:02:42 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:02:42.316193 | orchestrator | 2025-08-29 15:02:42 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:42.320053 | orchestrator | 2025-08-29 15:02:42 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:02:42.320121 | orchestrator | 2025-08-29 15:02:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:45.374988 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:02:45.375509 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:45.377603 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:02:45.377661 | orchestrator | 2025-08-29 15:02:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:48.436399 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:02:48.438470 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:48.440592 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:02:48.440734 | orchestrator | 2025-08-29 15:02:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:51.492509 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:02:51.493386 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:51.495452 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 
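The recap above closes the ceph play; its slowest step, "create openstack pool(s)" (46.00s), loops over the pool definitions visible in the logged items (backups, volumes, images, metrics, vms with pg_num 32, size 3, application rbd) and creates each pool on the first monitor. A minimal Ansible sketch of such a loop, using plain ceph CLI calls and a hypothetical openstack_pools variable rather than the actual OSISM/ceph-ansible tasks:

- name: Create OpenStack pools on the first monitor (sketch)
  hosts: testbed-node-5
  gather_facts: false
  vars:
    # Pool list mirrors the items logged above; treat it as illustrative only.
    openstack_pools:
      - { name: backups, pg_num: 32, size: 3, application: rbd }
      - { name: volumes, pg_num: 32, size: 3, application: rbd }
      - { name: images,  pg_num: 32, size: 3, application: rbd }
      - { name: metrics, pg_num: 32, size: 3, application: rbd }
      - { name: vms,     pg_num: 32, size: 3, application: rbd }
  tasks:
    - name: Create pool, set replica size, enable the rbd application
      ansible.builtin.shell: |
        ceph osd pool create {{ item.name }} {{ item.pg_num }} {{ item.pg_num }}
        ceph osd pool set {{ item.name }} size {{ item.size }}
        ceph osd pool application enable {{ item.name }} {{ item.application }}
      loop: "{{ openstack_pools }}"
      delegate_to: testbed-node-0   # first monitor, as in the "-> testbed-node-0(192.168.16.10)" lines above
      changed_when: true

In the actual run these pools are created via the ceph-ansible tooling delegated to testbed-node-0, as the "changed:" lines in the log show; the sketch only illustrates the shape of that loop.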
2025-08-29 15:02:51.495486 | orchestrator | 2025-08-29 15:02:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:54.547939 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:02:54.550274 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:54.551780 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:02:54.551833 | orchestrator | 2025-08-29 15:02:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:57.607941 | orchestrator | 2025-08-29 15:02:57 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:02:57.608054 | orchestrator | 2025-08-29 15:02:57 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:02:57.608743 | orchestrator | 2025-08-29 15:02:57 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:02:57.608777 | orchestrator | 2025-08-29 15:02:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:00.668503 | orchestrator | 2025-08-29 15:03:00 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:00.671303 | orchestrator | 2025-08-29 15:03:00 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:03:00.673295 | orchestrator | 2025-08-29 15:03:00 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:00.673371 | orchestrator | 2025-08-29 15:03:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:03.728067 | orchestrator | 2025-08-29 15:03:03 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:03.730146 | orchestrator | 2025-08-29 15:03:03 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:03:03.733548 | orchestrator | 2025-08-29 15:03:03 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:03.733576 | orchestrator | 2025-08-29 15:03:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:06.797010 | orchestrator | 2025-08-29 15:03:06 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:06.799450 | orchestrator | 2025-08-29 15:03:06 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:03:06.801216 | orchestrator | 2025-08-29 15:03:06 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:06.801365 | orchestrator | 2025-08-29 15:03:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:09.851819 | orchestrator | 2025-08-29 15:03:09 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:09.854009 | orchestrator | 2025-08-29 15:03:09 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:03:09.856500 | orchestrator | 2025-08-29 15:03:09 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:09.856551 | orchestrator | 2025-08-29 15:03:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:12.906298 | orchestrator | 2025-08-29 15:03:12 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:12.908532 | orchestrator | 2025-08-29 15:03:12 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state STARTED 2025-08-29 15:03:12.910517 | orchestrator | 
2025-08-29 15:03:12 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:12.910565 | orchestrator | 2025-08-29 15:03:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:15.976164 | orchestrator | 2025-08-29 15:03:15 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:15.977874 | orchestrator | 2025-08-29 15:03:15 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:15.979505 | orchestrator | 2025-08-29 15:03:15 | INFO  | Task 4920a805-e5f6-41db-95e2-11e889a5c96d is in state SUCCESS 2025-08-29 15:03:15.981866 | orchestrator | 2025-08-29 15:03:15 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:15.982243 | orchestrator | 2025-08-29 15:03:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:19.042608 | orchestrator | 2025-08-29 15:03:19 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:19.043352 | orchestrator | 2025-08-29 15:03:19 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:19.045575 | orchestrator | 2025-08-29 15:03:19 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:19.045622 | orchestrator | 2025-08-29 15:03:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:22.088147 | orchestrator | 2025-08-29 15:03:22 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:22.089287 | orchestrator | 2025-08-29 15:03:22 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:22.089844 | orchestrator | 2025-08-29 15:03:22 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:22.090233 | orchestrator | 2025-08-29 15:03:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:25.143073 | orchestrator | 2025-08-29 15:03:25 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:25.146003 | orchestrator | 2025-08-29 15:03:25 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:25.149595 | orchestrator | 2025-08-29 15:03:25 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:25.150769 | orchestrator | 2025-08-29 15:03:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:28.200886 | orchestrator | 2025-08-29 15:03:28 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:28.202467 | orchestrator | 2025-08-29 15:03:28 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:28.204826 | orchestrator | 2025-08-29 15:03:28 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:28.204861 | orchestrator | 2025-08-29 15:03:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:31.256529 | orchestrator | 2025-08-29 15:03:31 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:31.259699 | orchestrator | 2025-08-29 15:03:31 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state STARTED 2025-08-29 15:03:31.263736 | orchestrator | 2025-08-29 15:03:31 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:31.263781 | orchestrator | 2025-08-29 15:03:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:34.306295 | orchestrator | 2025-08-29 15:03:34 | INFO  | Task 
7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:34.309197 | orchestrator | 2025-08-29 15:03:34 | INFO  | Task 7a251464-51af-4628-949c-3625739b7a56 is in state SUCCESS 2025-08-29 15:03:34.310594 | orchestrator | 2025-08-29 15:03:34.310679 | orchestrator | 2025-08-29 15:03:34.310694 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-08-29 15:03:34.310707 | orchestrator | 2025-08-29 15:03:34.310809 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-08-29 15:03:34.310823 | orchestrator | Friday 29 August 2025 15:02:43 +0000 (0:00:00.204) 0:00:00.204 ********* 2025-08-29 15:03:34.310835 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-08-29 15:03:34.310847 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.310859 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311198 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:03:34.311228 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311246 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-08-29 15:03:34.311265 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-08-29 15:03:34.311285 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:03:34.311302 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-08-29 15:03:34.311320 | orchestrator | 2025-08-29 15:03:34.311339 | orchestrator | TASK [Create share directory] ************************************************** 2025-08-29 15:03:34.311388 | orchestrator | Friday 29 August 2025 15:02:47 +0000 (0:00:04.610) 0:00:04.815 ********* 2025-08-29 15:03:34.311401 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 15:03:34.311412 | orchestrator | 2025-08-29 15:03:34.311423 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-08-29 15:03:34.311434 | orchestrator | Friday 29 August 2025 15:02:48 +0000 (0:00:01.122) 0:00:05.938 ********* 2025-08-29 15:03:34.311445 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-08-29 15:03:34.311456 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311467 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311478 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:03:34.311489 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311500 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-08-29 15:03:34.311510 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-08-29 15:03:34.311521 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-08-29 
15:03:34.311532 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-08-29 15:03:34.311542 | orchestrator | 2025-08-29 15:03:34.311553 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-08-29 15:03:34.311564 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:16.027) 0:00:21.966 ********* 2025-08-29 15:03:34.311575 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-08-29 15:03:34.311586 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311596 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311607 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:03:34.311618 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:03:34.311659 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-08-29 15:03:34.311670 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-08-29 15:03:34.311681 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:03:34.311692 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-08-29 15:03:34.311703 | orchestrator | 2025-08-29 15:03:34.311713 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:03:34.311728 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:03:34.311747 | orchestrator | 2025-08-29 15:03:34.311765 | orchestrator | 2025-08-29 15:03:34.311785 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:03:34.311801 | orchestrator | Friday 29 August 2025 15:03:13 +0000 (0:00:08.136) 0:00:30.102 ********* 2025-08-29 15:03:34.311812 | orchestrator | =============================================================================== 2025-08-29 15:03:34.311823 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.03s 2025-08-29 15:03:34.311834 | orchestrator | Write ceph keys to the configuration directory -------------------------- 8.14s 2025-08-29 15:03:34.311844 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.61s 2025-08-29 15:03:34.311855 | orchestrator | Create share directory -------------------------------------------------- 1.12s 2025-08-29 15:03:34.311865 | orchestrator | 2025-08-29 15:03:34.311876 | orchestrator | 2025-08-29 15:03:34.311901 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:03:34.311912 | orchestrator | 2025-08-29 15:03:34.311938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:03:34.311959 | orchestrator | Friday 29 August 2025 15:01:34 +0000 (0:00:00.440) 0:00:00.440 ********* 2025-08-29 15:03:34.311970 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.311981 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.311992 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.312003 | orchestrator | 2025-08-29 15:03:34.312014 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:03:34.312025 | orchestrator | Friday 29 August 2025 
15:01:35 +0000 (0:00:00.411) 0:00:00.851 ********* 2025-08-29 15:03:34.312036 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-08-29 15:03:34.312047 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-08-29 15:03:34.312058 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 15:03:34.312069 | orchestrator | 2025-08-29 15:03:34.312080 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 15:03:34.312091 | orchestrator | 2025-08-29 15:03:34.312102 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:34.312113 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.550) 0:00:01.402 ********* 2025-08-29 15:03:34.312123 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:34.312134 | orchestrator | 2025-08-29 15:03:34.312145 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 15:03:34.312156 | orchestrator | Friday 29 August 2025 15:01:36 +0000 (0:00:00.631) 0:00:02.033 ********* 2025-08-29 15:03:34.312221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.312294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.312339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.312370 | orchestrator | 2025-08-29 15:03:34.312389 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 15:03:34.312407 | orchestrator | Friday 29 August 2025 15:01:37 +0000 (0:00:01.428) 0:00:03.462 ********* 2025-08-29 15:03:34.312427 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.312448 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.312468 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.312483 | orchestrator | 2025-08-29 15:03:34.312494 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:34.312505 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:00.511) 0:00:03.973 ********* 2025-08-29 15:03:34.312516 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:03:34.312542 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:03:34.312554 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:03:34.312565 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:03:34.312575 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:03:34.312586 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:03:34.312596 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:03:34.312607 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:03:34.312618 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:03:34.312664 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:03:34.312680 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:03:34.312696 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:03:34.312712 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:03:34.312732 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:03:34.312750 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:03:34.312764 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:03:34.312775 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 
15:03:34.312786 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:03:34.312796 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:03:34.312807 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:03:34.312818 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:03:34.312828 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:03:34.312839 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:03:34.312850 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:03:34.312862 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 15:03:34.312875 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 15:03:34.312886 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 15:03:34.312906 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 15:03:34.312917 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 15:03:34.312928 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 15:03:34.312938 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 15:03:34.312949 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 15:03:34.312960 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-08-29 15:03:34.312971 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 15:03:34.312982 | orchestrator | 2025-08-29 15:03:34.312993 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.313004 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.864) 0:00:04.838 ********* 2025-08-29 15:03:34.313015 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.313025 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.313036 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.313047 | orchestrator | 2025-08-29 15:03:34.313058 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.313069 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.351) 0:00:05.190 
********* 2025-08-29 15:03:34.313080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313091 | orchestrator | 2025-08-29 15:03:34.313114 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.313125 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.153) 0:00:05.343 ********* 2025-08-29 15:03:34.313136 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313147 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.313158 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.313170 | orchestrator | 2025-08-29 15:03:34.313180 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.313191 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.654) 0:00:05.998 ********* 2025-08-29 15:03:34.313202 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.313213 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.313224 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.313235 | orchestrator | 2025-08-29 15:03:34.313246 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.313257 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.413) 0:00:06.411 ********* 2025-08-29 15:03:34.313267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313278 | orchestrator | 2025-08-29 15:03:34.313289 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.313300 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.156) 0:00:06.568 ********* 2025-08-29 15:03:34.313311 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313322 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.313333 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.313343 | orchestrator | 2025-08-29 15:03:34.313354 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.313365 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.312) 0:00:06.881 ********* 2025-08-29 15:03:34.313376 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.313395 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.313406 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.313417 | orchestrator | 2025-08-29 15:03:34.313428 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.313439 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.347) 0:00:07.229 ********* 2025-08-29 15:03:34.313450 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313461 | orchestrator | 2025-08-29 15:03:34.313471 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.313482 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.370) 0:00:07.600 ********* 2025-08-29 15:03:34.313493 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313504 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.313515 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.313525 | orchestrator | 2025-08-29 15:03:34.313536 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.313547 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.313) 0:00:07.913 ********* 2025-08-29 
15:03:34.313558 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.313569 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.313580 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.313591 | orchestrator | 2025-08-29 15:03:34.313602 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.313613 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.299) 0:00:08.213 ********* 2025-08-29 15:03:34.313679 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313691 | orchestrator | 2025-08-29 15:03:34.313701 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.313712 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.119) 0:00:08.332 ********* 2025-08-29 15:03:34.313723 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313734 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.313745 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.313756 | orchestrator | 2025-08-29 15:03:34.313766 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.313777 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.290) 0:00:08.623 ********* 2025-08-29 15:03:34.313788 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.313799 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.313810 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.313821 | orchestrator | 2025-08-29 15:03:34.313832 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.313842 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.562) 0:00:09.185 ********* 2025-08-29 15:03:34.313853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313864 | orchestrator | 2025-08-29 15:03:34.313875 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.313886 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.140) 0:00:09.325 ********* 2025-08-29 15:03:34.313896 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.313907 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.313918 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.313929 | orchestrator | 2025-08-29 15:03:34.313939 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.313950 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.321) 0:00:09.647 ********* 2025-08-29 15:03:34.313961 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.313972 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.313983 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.313993 | orchestrator | 2025-08-29 15:03:34.314004 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.314069 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.344) 0:00:09.992 ********* 2025-08-29 15:03:34.314085 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314096 | orchestrator | 2025-08-29 15:03:34.314107 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.314127 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.135) 0:00:10.127 ********* 2025-08-29 15:03:34.314137 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 15:03:34.314148 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.314159 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.314170 | orchestrator | 2025-08-29 15:03:34.314181 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.314191 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.604) 0:00:10.732 ********* 2025-08-29 15:03:34.314207 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.314226 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.314237 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.314248 | orchestrator | 2025-08-29 15:03:34.314260 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.314270 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.362) 0:00:11.094 ********* 2025-08-29 15:03:34.314281 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314292 | orchestrator | 2025-08-29 15:03:34.314303 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.314314 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.243) 0:00:11.337 ********* 2025-08-29 15:03:34.314324 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314335 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.314346 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.314357 | orchestrator | 2025-08-29 15:03:34.314368 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.314378 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.288) 0:00:11.626 ********* 2025-08-29 15:03:34.314389 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.314400 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.314411 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.314421 | orchestrator | 2025-08-29 15:03:34.314432 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.314443 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.377) 0:00:12.003 ********* 2025-08-29 15:03:34.314454 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314465 | orchestrator | 2025-08-29 15:03:34.314476 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.314487 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.149) 0:00:12.153 ********* 2025-08-29 15:03:34.314497 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314508 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.314519 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.314529 | orchestrator | 2025-08-29 15:03:34.314540 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.314551 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.588) 0:00:12.742 ********* 2025-08-29 15:03:34.314562 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.314573 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.314583 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.314594 | orchestrator | 2025-08-29 15:03:34.314605 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.314616 | orchestrator | Friday 29 August 2025 
15:01:47 +0000 (0:00:00.333) 0:00:13.076 ********* 2025-08-29 15:03:34.314646 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314657 | orchestrator | 2025-08-29 15:03:34.314668 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.314678 | orchestrator | Friday 29 August 2025 15:01:47 +0000 (0:00:00.148) 0:00:13.224 ********* 2025-08-29 15:03:34.314689 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314700 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.314710 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.314721 | orchestrator | 2025-08-29 15:03:34.314732 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:34.314785 | orchestrator | Friday 29 August 2025 15:01:47 +0000 (0:00:00.326) 0:00:13.551 ********* 2025-08-29 15:03:34.314797 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:34.314808 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:34.314819 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:34.314830 | orchestrator | 2025-08-29 15:03:34.314841 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:34.314852 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.665) 0:00:14.216 ********* 2025-08-29 15:03:34.314863 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314874 | orchestrator | 2025-08-29 15:03:34.314885 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:34.314896 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.159) 0:00:14.376 ********* 2025-08-29 15:03:34.314907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.314918 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.314928 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.314939 | orchestrator | 2025-08-29 15:03:34.314950 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-08-29 15:03:34.314961 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.360) 0:00:14.737 ********* 2025-08-29 15:03:34.314972 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:34.314983 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:34.314994 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:34.315005 | orchestrator | 2025-08-29 15:03:34.315016 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-08-29 15:03:34.315026 | orchestrator | Friday 29 August 2025 15:01:50 +0000 (0:00:01.782) 0:00:16.519 ********* 2025-08-29 15:03:34.315037 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 15:03:34.315048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 15:03:34.315059 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 15:03:34.315070 | orchestrator | 2025-08-29 15:03:34.315080 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-08-29 15:03:34.315091 | orchestrator | Friday 29 August 2025 15:01:53 +0000 (0:00:02.294) 0:00:18.814 ********* 2025-08-29 15:03:34.315102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 
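The horizon service definitions logged above each carry a container healthcheck of the form ['CMD-SHELL', 'healthcheck_curl http://192.168.16.1x:80'] with interval 30, retries 3 and timeout 30. The sketch below shows roughly what such a probe amounts to; it is an illustrative stand-in reusing the values from the logged definition, not kolla's actual healthcheck_curl script.

    import time
    import urllib.error
    import urllib.request

    def http_healthcheck(url, retries=3, timeout=30, interval=30):
        """Return True once the endpoint answers; an HTTP error or timeout on
        every attempt means the service is considered unhealthy."""
        for attempt in range(1, retries + 1):
            try:
                # Any response urlopen accepts (2xx, followed 3xx) counts as healthy.
                with urllib.request.urlopen(url, timeout=timeout):
                    return True
            except (urllib.error.URLError, OSError) as exc:
                print(f"attempt {attempt}/{retries} failed: {exc}")
            if attempt < retries:
                time.sleep(interval)
        return False

    # Values taken from the healthcheck block in the logged service definition:
    # http_healthcheck("http://192.168.16.10:80", retries=3, timeout=30, interval=30)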
2025-08-29 15:03:34.315113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 15:03:34.315124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 15:03:34.315135 | orchestrator | 2025-08-29 15:03:34.315146 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-08-29 15:03:34.315174 | orchestrator | Friday 29 August 2025 15:01:55 +0000 (0:00:02.804) 0:00:21.618 ********* 2025-08-29 15:03:34.315186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 15:03:34.315197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 15:03:34.315208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 15:03:34.315219 | orchestrator | 2025-08-29 15:03:34.315230 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-08-29 15:03:34.315240 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:01.869) 0:00:23.488 ********* 2025-08-29 15:03:34.315251 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.315262 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.315273 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.315284 | orchestrator | 2025-08-29 15:03:34.315295 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-08-29 15:03:34.315317 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.342) 0:00:23.831 ********* 2025-08-29 15:03:34.315328 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.315339 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.315350 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.315361 | orchestrator | 2025-08-29 15:03:34.315371 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:34.315382 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.309) 0:00:24.141 ********* 2025-08-29 15:03:34.315393 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:34.315404 | orchestrator | 2025-08-29 15:03:34.315414 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-08-29 15:03:34.315425 | orchestrator | Friday 29 August 2025 15:01:59 +0000 (0:00:00.967) 0:00:25.109 ********* 2025-08-29 15:03:34.315439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.315467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-08-29 15:03:34.315488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.315501 | orchestrator | 2025-08-29 15:03:34.315512 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 15:03:34.315522 | orchestrator | Friday 29 August 2025 15:02:01 +0000 (0:00:01.802) 0:00:26.911 ********* 2025-08-29 15:03:34.315547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:34.315568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.315586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:34.315606 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.315679 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:34.315693 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.315705 | orchestrator | 2025-08-29 15:03:34.315716 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 15:03:34.315726 | orchestrator | Friday 29 August 2025 15:02:01 +0000 (0:00:00.713) 0:00:27.624 ********* 2025-08-29 15:03:34.315753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:34.315773 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.315785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:34.315797 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:03:34.315823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:34.315841 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.315853 | orchestrator | 2025-08-29 15:03:34.315863 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 15:03:34.315874 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:01.314) 0:00:28.939 ********* 2025-08-29 15:03:34.315886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.315912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.315932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:34.315945 | orchestrator | 2025-08-29 15:03:34.315956 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:34.315973 | orchestrator | Friday 29 August 2025 15:02:04 +0000 (0:00:01.358) 0:00:30.297 ********* 2025-08-29 15:03:34.315984 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:34.315995 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:34.316006 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:34.316017 | orchestrator | 2025-08-29 15:03:34.316028 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:34.316038 | orchestrator | Friday 29 August 2025 15:02:04 +0000 (0:00:00.378) 0:00:30.676 ********* 2025-08-29 15:03:34.316064 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:34.316075 | orchestrator | 2025-08-29 15:03:34.316086 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-08-29 15:03:34.316097 | orchestrator | Friday 29 August 2025 15:02:05 +0000 (0:00:01.012) 0:00:31.688 ********* 2025-08-29 15:03:34.316107 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:34.316118 | orchestrator | 2025-08-29 15:03:34.316129 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-08-29 15:03:34.316140 | 
orchestrator | Friday 29 August 2025 15:02:08 +0000 (0:00:02.410) 0:00:34.099 *********
2025-08-29 15:03:34.316151 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:34.316161 | orchestrator |
2025-08-29 15:03:34.316172 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-08-29 15:03:34.316183 | orchestrator | Friday 29 August 2025 15:02:10 +0000 (0:00:02.168) 0:00:36.268 *********
2025-08-29 15:03:34.316194 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:34.316205 | orchestrator |
2025-08-29 15:03:34.316215 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:03:34.316226 | orchestrator | Friday 29 August 2025 15:02:26 +0000 (0:00:16.459) 0:00:52.728 *********
2025-08-29 15:03:34.316237 | orchestrator |
2025-08-29 15:03:34.316248 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:03:34.316258 | orchestrator | Friday 29 August 2025 15:02:27 +0000 (0:00:00.073) 0:00:52.801 *********
2025-08-29 15:03:34.316269 | orchestrator |
2025-08-29 15:03:34.316280 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:03:34.316291 | orchestrator | Friday 29 August 2025 15:02:27 +0000 (0:00:00.081) 0:00:52.883 *********
2025-08-29 15:03:34.316301 | orchestrator |
2025-08-29 15:03:34.316312 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-08-29 15:03:34.316323 | orchestrator | Friday 29 August 2025 15:02:27 +0000 (0:00:00.074) 0:00:52.958 *********
2025-08-29 15:03:34.316333 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:34.316344 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:34.316355 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:34.316366 | orchestrator |
2025-08-29 15:03:34.316377 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:03:34.316388 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-08-29 15:03:34.316399 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-08-29 15:03:34.316410 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-08-29 15:03:34.316421 | orchestrator |
2025-08-29 15:03:34.316432 | orchestrator |
2025-08-29 15:03:34.316443 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:03:34.316453 | orchestrator | Friday 29 August 2025 15:03:33 +0000 (0:01:06.486) 0:01:59.445 *********
2025-08-29 15:03:34.316464 | orchestrator | ===============================================================================
2025-08-29 15:03:34.316475 | orchestrator | horizon : Restart horizon container ------------------------------------ 66.49s
2025-08-29 15:03:34.316485 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.46s
2025-08-29 15:03:34.316503 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.80s
2025-08-29 15:03:34.316513 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.41s
2025-08-29 15:03:34.316524 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.29s
2025-08-29 15:03:34.316535 |
orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.17s 2025-08-29 15:03:34.316545 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.87s 2025-08-29 15:03:34.316556 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.80s 2025-08-29 15:03:34.316566 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.78s 2025-08-29 15:03:34.316577 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.43s 2025-08-29 15:03:34.316588 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.36s 2025-08-29 15:03:34.316598 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.31s 2025-08-29 15:03:34.316609 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.01s 2025-08-29 15:03:34.316636 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.97s 2025-08-29 15:03:34.316648 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2025-08-29 15:03:34.316658 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2025-08-29 15:03:34.316669 | orchestrator | horizon : Update policy file name --------------------------------------- 0.67s 2025-08-29 15:03:34.316680 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.65s 2025-08-29 15:03:34.316691 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2025-08-29 15:03:34.316701 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s 2025-08-29 15:03:34.316712 | orchestrator | 2025-08-29 15:03:34 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:34.316723 | orchestrator | 2025-08-29 15:03:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:37.347169 | orchestrator | 2025-08-29 15:03:37 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:37.348184 | orchestrator | 2025-08-29 15:03:37 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:37.348215 | orchestrator | 2025-08-29 15:03:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:40.398418 | orchestrator | 2025-08-29 15:03:40 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:40.400124 | orchestrator | 2025-08-29 15:03:40 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:40.400176 | orchestrator | 2025-08-29 15:03:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:43.460101 | orchestrator | 2025-08-29 15:03:43 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:43.463951 | orchestrator | 2025-08-29 15:03:43 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:43.464046 | orchestrator | 2025-08-29 15:03:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:46.509702 | orchestrator | 2025-08-29 15:03:46 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:46.510675 | orchestrator | 2025-08-29 15:03:46 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:46.510723 | orchestrator | 
2025-08-29 15:03:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:49.567347 | orchestrator | 2025-08-29 15:03:49 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:49.571027 | orchestrator | 2025-08-29 15:03:49 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:49.571112 | orchestrator | 2025-08-29 15:03:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:52.618592 | orchestrator | 2025-08-29 15:03:52 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:52.620770 | orchestrator | 2025-08-29 15:03:52 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:52.620824 | orchestrator | 2025-08-29 15:03:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:55.673824 | orchestrator | 2025-08-29 15:03:55 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:55.677031 | orchestrator | 2025-08-29 15:03:55 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:55.677098 | orchestrator | 2025-08-29 15:03:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:58.731124 | orchestrator | 2025-08-29 15:03:58 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:03:58.732709 | orchestrator | 2025-08-29 15:03:58 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:03:58.732786 | orchestrator | 2025-08-29 15:03:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:01.784429 | orchestrator | 2025-08-29 15:04:01 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:04:01.784537 | orchestrator | 2025-08-29 15:04:01 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:01.784553 | orchestrator | 2025-08-29 15:04:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:04.833672 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:04:04.835417 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:04.835548 | orchestrator | 2025-08-29 15:04:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:07.885641 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:04:07.886463 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:07.886504 | orchestrator | 2025-08-29 15:04:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:10.933009 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:04:10.933655 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:10.933712 | orchestrator | 2025-08-29 15:04:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:13.971539 | orchestrator | 2025-08-29 15:04:13 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state STARTED 2025-08-29 15:04:13.972304 | orchestrator | 2025-08-29 15:04:13 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:13.972358 | orchestrator | 2025-08-29 15:04:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:17.055996 | 
orchestrator | 2025-08-29 15:04:17 | INFO  | Task cb8e7840-7c23-4686-8d46-1a2c3976ce6d is in state STARTED 2025-08-29 15:04:17.059492 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:17.060428 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task 7cbfac6b-ac5c-4b30-8769-ca42e9a7a71a is in state SUCCESS 2025-08-29 15:04:17.061124 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:17.063591 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:17.063628 | orchestrator | 2025-08-29 15:04:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:20.127377 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task cb8e7840-7c23-4686-8d46-1a2c3976ce6d is in state STARTED 2025-08-29 15:04:20.127483 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:20.127852 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:20.129262 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:20.129306 | orchestrator | 2025-08-29 15:04:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:23.173850 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task cb8e7840-7c23-4686-8d46-1a2c3976ce6d is in state SUCCESS 2025-08-29 15:04:23.174561 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:23.175449 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:23.177642 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:23.177686 | orchestrator | 2025-08-29 15:04:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:26.310365 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:26.322780 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:26.324996 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:26.329527 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state STARTED 2025-08-29 15:04:26.331870 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:26.331938 | orchestrator | 2025-08-29 15:04:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:29.365231 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:29.366279 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:29.367433 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:29.369439 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task 183c3e2e-fe57-4130-bd39-5f4c08d18c80 is in state SUCCESS 2025-08-29 15:04:29.370084 | orchestrator | 2025-08-29 15:04:29.370111 | orchestrator | 2025-08-29 15:04:29.370120 | orchestrator | PLAY [Apply 
role cephclient] ***************************************************
2025-08-29 15:04:29.370131 | orchestrator |
2025-08-29 15:04:29.370140 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-08-29 15:04:29.370150 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:00.238) 0:00:00.238 *********
2025-08-29 15:04:29.370160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-08-29 15:04:29.370170 | orchestrator |
2025-08-29 15:04:29.370179 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-08-29 15:04:29.370213 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:00.289) 0:00:00.528 *********
2025-08-29 15:04:29.370223 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-08-29 15:04:29.370233 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-08-29 15:04:29.370242 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-08-29 15:04:29.370252 | orchestrator |
2025-08-29 15:04:29.370382 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-08-29 15:04:29.370437 | orchestrator | Friday 29 August 2025 15:03:19 +0000 (0:00:01.357) 0:00:01.886 *********
2025-08-29 15:04:29.370450 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-08-29 15:04:29.370459 | orchestrator |
2025-08-29 15:04:29.370468 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-08-29 15:04:29.370478 | orchestrator | Friday 29 August 2025 15:03:21 +0000 (0:00:01.284) 0:00:03.170 *********
2025-08-29 15:04:29.370487 | orchestrator | changed: [testbed-manager]
2025-08-29 15:04:29.370497 | orchestrator |
2025-08-29 15:04:29.370506 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-08-29 15:04:29.370658 | orchestrator | Friday 29 August 2025 15:03:22 +0000 (0:00:01.248) 0:00:04.418 *********
2025-08-29 15:04:29.370668 | orchestrator | changed: [testbed-manager]
2025-08-29 15:04:29.370677 | orchestrator |
2025-08-29 15:04:29.370685 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-08-29 15:04:29.370694 | orchestrator | Friday 29 August 2025 15:03:23 +0000 (0:00:01.083) 0:00:05.502 *********
2025-08-29 15:04:29.370703 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
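The FAILED - RETRYING line above is Ansible's standard until/retries loop output: the role keeps re-checking the freshly started cephclient service, counting down from ten attempts, which is why the same task later accounts for 41.64s in the TASKS RECAP. A minimal sketch of a task that would produce output of this shape, assuming a docker-compose based service rooted at /opt/cephclient (module choice and parameters are illustrative only and not the actual implementation of the osism.services.cephclient role):

    - name: Manage cephclient service
      # Illustrative sketch; the real task in the role may use a different module.
      community.docker.docker_compose_v2:
        project_src: /opt/cephclient   # directory holding the docker-compose.yml copied above
        state: present
      register: result
      until: result is success          # retry until the compose stack comes up cleanly
      retries: 10                       # matches the "(10 retries left)" countdown in the log
      delay: 5                          # assumed pause between attempts, in seconds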
2025-08-29 15:04:29.370711 | orchestrator | ok: [testbed-manager]
2025-08-29 15:04:29.370720 | orchestrator |
2025-08-29 15:04:29.370729 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-08-29 15:04:29.370738 | orchestrator | Friday 29 August 2025 15:04:05 +0000 (0:00:41.638) 0:00:47.141 *********
2025-08-29 15:04:29.370746 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-08-29 15:04:29.370756 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-08-29 15:04:29.370765 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-08-29 15:04:29.370773 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-08-29 15:04:29.370782 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-08-29 15:04:29.370790 | orchestrator |
2025-08-29 15:04:29.370799 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-08-29 15:04:29.370809 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:04.562) 0:00:51.703 *********
2025-08-29 15:04:29.370824 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-08-29 15:04:29.370838 | orchestrator |
2025-08-29 15:04:29.370858 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-08-29 15:04:29.370878 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:00.465) 0:00:52.169 *********
2025-08-29 15:04:29.370892 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:04:29.370908 | orchestrator |
2025-08-29 15:04:29.370924 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-08-29 15:04:29.370940 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:00.115) 0:00:52.285 *********
2025-08-29 15:04:29.370955 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:04:29.370969 | orchestrator |
2025-08-29 15:04:29.370987 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-08-29 15:04:29.371004 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:00.287) 0:00:52.572 *********
2025-08-29 15:04:29.371016 | orchestrator | changed: [testbed-manager]
2025-08-29 15:04:29.371027 | orchestrator |
2025-08-29 15:04:29.371036 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-08-29 15:04:29.371674 | orchestrator | Friday 29 August 2025 15:04:12 +0000 (0:00:01.672) 0:00:54.245 *********
2025-08-29 15:04:29.371721 | orchestrator | changed: [testbed-manager]
2025-08-29 15:04:29.371767 | orchestrator |
2025-08-29 15:04:29.371781 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-08-29 15:04:29.371894 | orchestrator | Friday 29 August 2025 15:04:13 +0000 (0:00:00.750) 0:00:54.996 *********
2025-08-29 15:04:29.371919 | orchestrator | changed: [testbed-manager]
2025-08-29 15:04:29.371937 | orchestrator |
2025-08-29 15:04:29.371955 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-08-29 15:04:29.372160 | orchestrator | Friday 29 August 2025 15:04:13 +0000 (0:00:00.613) 0:00:55.609 *********
2025-08-29 15:04:29.372194 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-08-29 15:04:29.372213 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-08-29 15:04:29.372231 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-08-29 15:04:29.372242 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-08-29 15:04:29.372253 | orchestrator |
2025-08-29 15:04:29.372263 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:04:29.372275 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 15:04:29.372288 | orchestrator |
2025-08-29 15:04:29.372299 | orchestrator |
2025-08-29 15:04:29.372369 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:04:29.372391 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:01.587) 0:00:57.197 *********
2025-08-29 15:04:29.372408 | orchestrator | ===============================================================================
2025-08-29 15:04:29.372550 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.64s
2025-08-29 15:04:29.372620 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.56s
2025-08-29 15:04:29.372632 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.67s
2025-08-29 15:04:29.372643 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s
2025-08-29 15:04:29.372654 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s
2025-08-29 15:04:29.372664 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.28s
2025-08-29 15:04:29.372675 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.25s
2025-08-29 15:04:29.372686 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.08s
2025-08-29 15:04:29.372709 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s
2025-08-29 15:04:29.372721 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2025-08-29 15:04:29.372732 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2025-08-29 15:04:29.372743 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.29s
2025-08-29 15:04:29.372753 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-08-29 15:04:29.372764 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-08-29 15:04:29.372775 | orchestrator |
2025-08-29 15:04:29.372786 | orchestrator |
2025-08-29 15:04:29.372796 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:04:29.372807 | orchestrator |
2025-08-29 15:04:29.372818 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:04:29.372829 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.209) 0:00:00.209 *********
2025-08-29 15:04:29.372840 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:29.372850 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:29.372861 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:29.372872 | orchestrator |
2025-08-29 15:04:29.372883 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:04:29.372893 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.361) 0:00:00.571 *********
2025-08-29 15:04:29.372904 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-08-29 15:04:29.372930 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-08-29 15:04:29.372941 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-08-29 15:04:29.372951 | orchestrator |
2025-08-29 15:04:29.372962 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-08-29 15:04:29.372973 | orchestrator |
2025-08-29 15:04:29.372983 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-08-29 15:04:29.372994 | orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:00.801) 0:00:01.373 *********
2025-08-29 15:04:29.373005 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:29.373016 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:29.373027 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:29.373037 | orchestrator |
2025-08-29 15:04:29.373048 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:04:29.373059 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:04:29.373071 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:04:29.373082 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:04:29.373093 | orchestrator |
2025-08-29 15:04:29.373104 | orchestrator |
2025-08-29 15:04:29.373115 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:04:29.373125 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.932) 0:00:02.305 *********
2025-08-29 15:04:29.373136 | orchestrator | ===============================================================================
2025-08-29 15:04:29.373147 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.93s
2025-08-29 15:04:29.373158 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2025-08-29 15:04:29.373169 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2025-08-29 15:04:29.373181 | orchestrator |
2025-08-29 15:04:29.373193 | orchestrator |
2025-08-29 15:04:29.373206 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:04:29.373218 | orchestrator |
2025-08-29 15:04:29.373229 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:04:29.373242 | orchestrator | Friday 29 August 2025 15:01:34 +0000 (0:00:00.345) 0:00:00.346 *********
2025-08-29 15:04:29.373254 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:29.373267 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:29.373278 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:29.373290 | orchestrator |
2025-08-29 15:04:29.373302 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:04:29.373314 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.397) 0:00:00.743 *********
2025-08-29 15:04:29.373326 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-08-29 15:04:29.373339 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-08-29 15:04:29.373351 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-08-29
15:04:29.373363 | orchestrator | 2025-08-29 15:04:29.373374 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 15:04:29.373387 | orchestrator | 2025-08-29 15:04:29.373438 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:29.373453 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.534) 0:00:01.278 ********* 2025-08-29 15:04:29.373466 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:29.373479 | orchestrator | 2025-08-29 15:04:29.373491 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 15:04:29.373502 | orchestrator | Friday 29 August 2025 15:01:36 +0000 (0:00:00.662) 0:00:01.941 ********* 2025-08-29 15:04:29.373537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.373555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.373633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.373685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.373701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.373739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.373750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.373761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.373771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.373781 | orchestrator | 2025-08-29 15:04:29.373791 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 15:04:29.373801 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:02.166) 0:00:04.107 ********* 2025-08-29 15:04:29.373811 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 15:04:29.373821 | orchestrator | 2025-08-29 15:04:29.373830 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 15:04:29.373840 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.932) 0:00:05.041 ********* 2025-08-29 15:04:29.373850 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:29.373859 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:29.373869 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:29.373878 | orchestrator | 2025-08-29 15:04:29.373888 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 15:04:29.373898 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.514) 0:00:05.555 ********* 2025-08-29 15:04:29.373915 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:04:29.373926 | orchestrator | 2025-08-29 15:04:29.373942 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:29.373965 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.858) 0:00:06.413 ********* 2025-08-29 15:04:29.373987 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:29.374009 | orchestrator | 2025-08-29 15:04:29.374064 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 15:04:29.374080 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.547) 0:00:06.961 ********* 2025-08-29 15:04:29.374106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374270 | orchestrator | 2025-08-29 15:04:29.374280 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 15:04:29.374289 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:03.793) 0:00:10.754 ********* 2025-08-29 15:04:29.374307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:29.374324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:29.374355 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.374366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:29.374377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:29.374413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:29.374424 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.374439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:29.374460 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.374469 | orchestrator | 2025-08-29 15:04:29.374479 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 15:04:29.374489 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.604) 0:00:11.359 ********* 2025-08-29 15:04:29.374499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:29.374558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:29.374612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.374627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:29.374638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:29.374666 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.374676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:29.374695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374710 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:29.374720 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.374730 | orchestrator | 2025-08-29 15:04:29.374740 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 15:04:29.374750 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.835) 0:00:12.194 ********* 2025-08-29 15:04:29.374760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.374881 | orchestrator | 2025-08-29 15:04:29.374891 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 15:04:29.374900 | orchestrator | Friday 29 August 2025 15:01:50 +0000 (0:00:03.696) 0:00:15.891 ********* 2025-08-29 15:04:29.374923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.374979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.374990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.375005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.375016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.375032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.375042 | orchestrator | 2025-08-29 15:04:29.375051 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 15:04:29.375061 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:06.091) 0:00:21.983 ********* 2025-08-29 15:04:29.375071 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.375081 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:29.375090 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:29.375100 | orchestrator | 2025-08-29 15:04:29.375109 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 15:04:29.375119 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:01.577) 0:00:23.560 ********* 2025-08-29 15:04:29.375129 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.375138 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.375153 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.375171 | orchestrator | 2025-08-29 15:04:29.375197 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 15:04:29.375215 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.640) 0:00:24.200 ********* 2025-08-29 15:04:29.375236 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.375257 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.375274 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.375289 | orchestrator | 2025-08-29 15:04:29.375306 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-08-29 15:04:29.375323 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.327) 0:00:24.528 ********* 2025-08-29 15:04:29.375340 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.375356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.375372 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.375388 | orchestrator | 2025-08-29 15:04:29.375398 | orchestrator | TASK [keystone : Copying over existing policy file] 
**************************** 2025-08-29 15:04:29.375408 | orchestrator | Friday 29 August 2025 15:01:59 +0000 (0:00:00.571) 0:00:25.099 ********* 2025-08-29 15:04:29.375435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.375447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.375472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.375482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.375499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.375510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:29.375524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.375541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.375551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.375627 | orchestrator | 2025-08-29 15:04:29.375640 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:29.375650 | orchestrator | Friday 29 August 2025 15:02:02 +0000 (0:00:02.711) 0:00:27.810 ********* 2025-08-29 15:04:29.375660 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.375670 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.375679 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.375689 | orchestrator | 2025-08-29 15:04:29.375699 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-08-29 15:04:29.375708 | orchestrator | Friday 29 August 2025 15:02:02 +0000 (0:00:00.395) 0:00:28.206 ********* 2025-08-29 15:04:29.375718 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:04:29.375728 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:04:29.375737 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:04:29.375747 | orchestrator | 2025-08-29 15:04:29.375756 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-08-29 15:04:29.375764 | orchestrator | Friday 29 August 2025 15:02:04 +0000 (0:00:02.101) 0:00:30.307 ********* 2025-08-29 15:04:29.375772 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:04:29.375780 | orchestrator | 2025-08-29 15:04:29.375788 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-08-29 15:04:29.375795 | orchestrator | Friday 29 August 2025 15:02:06 +0000 (0:00:01.857) 0:00:32.164 ********* 2025-08-29 15:04:29.375803 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.375811 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.375819 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.375827 | orchestrator | 2025-08-29 15:04:29.375835 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-08-29 15:04:29.375843 | orchestrator | Friday 29 August 2025 15:02:07 +0000 (0:00:00.666) 0:00:32.831 ********* 2025-08-29 15:04:29.375851 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 15:04:29.375864 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:04:29.375879 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 15:04:29.375886 | orchestrator | 2025-08-29 15:04:29.375895 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-08-29 15:04:29.375902 | orchestrator | Friday 29 August 2025 15:02:08 +0000 (0:00:01.324) 0:00:34.155 ********* 2025-08-29 15:04:29.375910 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:29.375918 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:29.375926 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:29.375933 | orchestrator | 2025-08-29 15:04:29.375941 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-08-29 15:04:29.375949 | orchestrator | Friday 29 August 2025 15:02:08 +0000 (0:00:00.378) 
0:00:34.533 ********* 2025-08-29 15:04:29.375957 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 15:04:29.375965 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 15:04:29.375972 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 15:04:29.375984 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 15:04:29.375992 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 15:04:29.376000 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 15:04:29.376010 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 15:04:29.376024 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 15:04:29.376037 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 15:04:29.376049 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 15:04:29.376062 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 15:04:29.376079 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 15:04:29.376095 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 15:04:29.376109 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 15:04:29.376121 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 15:04:29.376135 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:04:29.376149 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:04:29.376163 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:04:29.376176 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:04:29.376190 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:04:29.376198 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:04:29.376206 | orchestrator | 2025-08-29 15:04:29.376214 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-08-29 15:04:29.376222 | orchestrator | Friday 29 August 2025 15:02:19 +0000 (0:00:10.241) 0:00:44.775 ********* 2025-08-29 15:04:29.376230 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:04:29.376237 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:04:29.376245 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:04:29.376260 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:04:29.376268 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:04:29.376276 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:04:29.376284 | orchestrator | 2025-08-29 15:04:29.376292 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-08-29 15:04:29.376300 | orchestrator | Friday 29 August 2025 15:02:21 +0000 (0:00:02.895) 0:00:47.670 ********* 2025-08-29 15:04:29.376317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.376331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.376341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:29.376350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.376364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.376379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:29.376392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.376401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2025-08-29 15:04:29.376409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:29.376417 | orchestrator | 2025-08-29 15:04:29.376425 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:29.376433 | orchestrator | Friday 29 August 2025 15:02:24 +0000 (0:00:02.540) 0:00:50.211 ********* 2025-08-29 15:04:29.376441 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.376449 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.376457 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.376470 | orchestrator | 2025-08-29 15:04:29.376478 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-08-29 15:04:29.376485 | orchestrator | Friday 29 August 2025 15:02:24 +0000 (0:00:00.334) 0:00:50.545 ********* 2025-08-29 15:04:29.376493 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.376501 | orchestrator | 2025-08-29 15:04:29.376509 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-08-29 15:04:29.376517 | orchestrator | Friday 29 August 2025 15:02:27 +0000 (0:00:02.507) 0:00:53.053 ********* 2025-08-29 15:04:29.376524 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.376532 | orchestrator | 2025-08-29 15:04:29.376540 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-08-29 15:04:29.376548 | orchestrator | Friday 29 August 2025 15:02:29 +0000 (0:00:02.354) 0:00:55.408 ********* 2025-08-29 15:04:29.376556 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:29.376589 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:29.376597 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:29.376605 | orchestrator | 2025-08-29 15:04:29.376613 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-08-29 15:04:29.376621 | orchestrator | Friday 29 August 2025 15:02:31 +0000 (0:00:01.338) 0:00:56.746 ********* 2025-08-29 15:04:29.376629 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:29.376637 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:29.376645 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:29.376653 | orchestrator | 2025-08-29 15:04:29.376661 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-08-29 15:04:29.376668 | orchestrator | Friday 29 August 2025 15:02:31 +0000 (0:00:00.447) 0:00:57.194 ********* 2025-08-29 15:04:29.376676 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.376684 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.376692 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.376700 | orchestrator | 2025-08-29 15:04:29.376708 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-08-29 15:04:29.376716 | orchestrator | 
Friday 29 August 2025 15:02:31 +0000 (0:00:00.405) 0:00:57.599 ********* 2025-08-29 15:04:29.376724 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.376732 | orchestrator | 2025-08-29 15:04:29.376740 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-08-29 15:04:29.376748 | orchestrator | Friday 29 August 2025 15:02:46 +0000 (0:00:14.128) 0:01:11.728 ********* 2025-08-29 15:04:29.376756 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.376763 | orchestrator | 2025-08-29 15:04:29.376776 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 15:04:29.376784 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:10.192) 0:01:21.921 ********* 2025-08-29 15:04:29.376792 | orchestrator | 2025-08-29 15:04:29.376800 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 15:04:29.376808 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.069) 0:01:21.991 ********* 2025-08-29 15:04:29.376816 | orchestrator | 2025-08-29 15:04:29.376824 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 15:04:29.376832 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.393) 0:01:22.385 ********* 2025-08-29 15:04:29.376840 | orchestrator | 2025-08-29 15:04:29.376848 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-08-29 15:04:29.376856 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.101) 0:01:22.486 ********* 2025-08-29 15:04:29.376863 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.376871 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:29.376884 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:29.376897 | orchestrator | 2025-08-29 15:04:29.376910 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-08-29 15:04:29.376928 | orchestrator | Friday 29 August 2025 15:03:23 +0000 (0:00:26.381) 0:01:48.868 ********* 2025-08-29 15:04:29.376944 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.376971 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:29.376989 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:29.377005 | orchestrator | 2025-08-29 15:04:29.377018 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-08-29 15:04:29.377032 | orchestrator | Friday 29 August 2025 15:03:33 +0000 (0:00:10.114) 0:01:58.983 ********* 2025-08-29 15:04:29.377044 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.377056 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:29.377067 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:29.377081 | orchestrator | 2025-08-29 15:04:29.377093 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:29.377107 | orchestrator | Friday 29 August 2025 15:03:41 +0000 (0:00:07.888) 0:02:06.871 ********* 2025-08-29 15:04:29.377118 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:29.377126 | orchestrator | 2025-08-29 15:04:29.377134 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-08-29 15:04:29.377142 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:00.935) 
0:02:07.807 ********* 2025-08-29 15:04:29.377151 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:29.377164 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:29.377177 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:29.377190 | orchestrator | 2025-08-29 15:04:29.377206 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-08-29 15:04:29.377224 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:00.855) 0:02:08.663 ********* 2025-08-29 15:04:29.377236 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:29.377249 | orchestrator | 2025-08-29 15:04:29.377261 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-08-29 15:04:29.377274 | orchestrator | Friday 29 August 2025 15:03:44 +0000 (0:00:01.822) 0:02:10.488 ********* 2025-08-29 15:04:29.377286 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-08-29 15:04:29.377297 | orchestrator | 2025-08-29 15:04:29.377309 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-08-29 15:04:29.377323 | orchestrator | Friday 29 August 2025 15:03:55 +0000 (0:00:10.463) 0:02:20.952 ********* 2025-08-29 15:04:29.377336 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-08-29 15:04:29.377349 | orchestrator | 2025-08-29 15:04:29.377363 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-08-29 15:04:29.377376 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:19.993) 0:02:40.946 ********* 2025-08-29 15:04:29.377390 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-08-29 15:04:29.377403 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-08-29 15:04:29.377416 | orchestrator | 2025-08-29 15:04:29.377424 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-08-29 15:04:29.377432 | orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:06.434) 0:02:47.380 ********* 2025-08-29 15:04:29.377440 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.377448 | orchestrator | 2025-08-29 15:04:29.377456 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-08-29 15:04:29.377464 | orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:00.149) 0:02:47.529 ********* 2025-08-29 15:04:29.377472 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.377480 | orchestrator | 2025-08-29 15:04:29.377487 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-08-29 15:04:29.377495 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.459) 0:02:47.989 ********* 2025-08-29 15:04:29.377503 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.377511 | orchestrator | 2025-08-29 15:04:29.377518 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-08-29 15:04:29.377526 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.138) 0:02:48.128 ********* 2025-08-29 15:04:29.377543 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.377551 | orchestrator | 2025-08-29 15:04:29.377559 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-08-29 15:04:29.377593 | orchestrator | Friday 29 August 2025 
15:04:22 +0000 (0:00:00.422) 0:02:48.550 ********* 2025-08-29 15:04:29.377601 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:29.377609 | orchestrator | 2025-08-29 15:04:29.377617 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:29.377625 | orchestrator | Friday 29 August 2025 15:04:26 +0000 (0:00:03.621) 0:02:52.172 ********* 2025-08-29 15:04:29.377633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:29.377641 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:29.377649 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:29.377656 | orchestrator | 2025-08-29 15:04:29.377672 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:04:29.377681 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-08-29 15:04:29.377690 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 15:04:29.377699 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 15:04:29.377706 | orchestrator | 2025-08-29 15:04:29.377714 | orchestrator | 2025-08-29 15:04:29.377722 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:04:29.377730 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:01.485) 0:02:53.657 ********* 2025-08-29 15:04:29.377738 | orchestrator | =============================================================================== 2025-08-29 15:04:29.377752 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 26.38s 2025-08-29 15:04:29.377760 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.99s 2025-08-29 15:04:29.377768 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.13s 2025-08-29 15:04:29.377775 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.46s 2025-08-29 15:04:29.377783 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.24s 2025-08-29 15:04:29.377791 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.19s 2025-08-29 15:04:29.377799 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.11s 2025-08-29 15:04:29.377807 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.89s 2025-08-29 15:04:29.377814 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.43s 2025-08-29 15:04:29.377822 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.09s 2025-08-29 15:04:29.377830 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.79s 2025-08-29 15:04:29.377838 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.70s 2025-08-29 15:04:29.377845 | orchestrator | keystone : Creating default user role ----------------------------------- 3.62s 2025-08-29 15:04:29.377853 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.90s 2025-08-29 15:04:29.377861 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.71s 2025-08-29 15:04:29.377869 | orchestrator | 
keystone : Check keystone containers ------------------------------------ 2.54s 2025-08-29 15:04:29.377876 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.51s 2025-08-29 15:04:29.377884 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.35s 2025-08-29 15:04:29.377892 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.17s 2025-08-29 15:04:29.377900 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.10s 2025-08-29 15:04:29.377914 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:29.377922 | orchestrator | 2025-08-29 15:04:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:32.427235 | orchestrator | 2025-08-29 15:04:32 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:32.427348 | orchestrator | 2025-08-29 15:04:32 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:32.433128 | orchestrator | 2025-08-29 15:04:32 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:32.433218 | orchestrator | 2025-08-29 15:04:32 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:32.435853 | orchestrator | 2025-08-29 15:04:32 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:32.435921 | orchestrator | 2025-08-29 15:04:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:35.480014 | orchestrator | 2025-08-29 15:04:35 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:35.481533 | orchestrator | 2025-08-29 15:04:35 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:35.483330 | orchestrator | 2025-08-29 15:04:35 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:35.485479 | orchestrator | 2025-08-29 15:04:35 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:35.487079 | orchestrator | 2025-08-29 15:04:35 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:35.487691 | orchestrator | 2025-08-29 15:04:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:38.568084 | orchestrator | 2025-08-29 15:04:38 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:38.568176 | orchestrator | 2025-08-29 15:04:38 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:38.568188 | orchestrator | 2025-08-29 15:04:38 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:38.568197 | orchestrator | 2025-08-29 15:04:38 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:38.568204 | orchestrator | 2025-08-29 15:04:38 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:38.568210 | orchestrator | 2025-08-29 15:04:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:41.603047 | orchestrator | 2025-08-29 15:04:41 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:41.606335 | orchestrator | 2025-08-29 15:04:41 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:41.606728 | orchestrator | 2025-08-29 15:04:41 | INFO  | Task 
3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:41.608500 | orchestrator | 2025-08-29 15:04:41 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:41.610008 | orchestrator | 2025-08-29 15:04:41 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:41.610432 | orchestrator | 2025-08-29 15:04:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:44.653959 | orchestrator | 2025-08-29 15:04:44 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:44.654335 | orchestrator | 2025-08-29 15:04:44 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:44.655214 | orchestrator | 2025-08-29 15:04:44 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:44.656183 | orchestrator | 2025-08-29 15:04:44 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:44.657242 | orchestrator | 2025-08-29 15:04:44 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:44.657263 | orchestrator | 2025-08-29 15:04:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:47.701241 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:47.701347 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:47.703738 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:47.705249 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:47.706471 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:47.706507 | orchestrator | 2025-08-29 15:04:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:50.755979 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:50.758653 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:50.760937 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:50.763969 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:50.765389 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:50.765431 | orchestrator | 2025-08-29 15:04:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:53.821650 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:53.829292 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:53.832911 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:53.835399 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:53.835850 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:53.835900 | orchestrator | 2025-08-29 
15:04:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:56.934115 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:56.935458 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:56.937196 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:56.938949 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:56.940089 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:56.940118 | orchestrator | 2025-08-29 15:04:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:59.985014 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:04:59.985142 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:04:59.985159 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:04:59.985171 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:04:59.985182 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:04:59.985193 | orchestrator | 2025-08-29 15:04:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:03.024837 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:05:03.024903 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:03.024913 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:03.024921 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:03.024928 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:03.024936 | orchestrator | 2025-08-29 15:05:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:06.158889 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:05:06.158978 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:06.158994 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:06.159006 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:06.159018 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:06.159029 | orchestrator | 2025-08-29 15:05:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:09.108884 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state STARTED 2025-08-29 15:05:09.108994 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:09.109013 | orchestrator | 2025-08-29 
15:05:09 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:09.109027 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:09.109040 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:09.109052 | orchestrator | 2025-08-29 15:05:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:12.141769 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:12.141941 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task d0c99fe9-0c2f-466b-9aeb-e3968c6491ea is in state SUCCESS 2025-08-29 15:05:12.143293 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:12.144409 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:12.145118 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:12.146725 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:12.146791 | orchestrator | 2025-08-29 15:05:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:15.216861 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:15.217609 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:15.218935 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:15.220544 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:15.221591 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:15.221633 | orchestrator | 2025-08-29 15:05:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:18.278811 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:18.278939 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:18.279800 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:18.280611 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:18.281488 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:18.281568 | orchestrator | 2025-08-29 15:05:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:21.325399 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:21.326138 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:21.326932 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:21.328010 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:21.329107 | 
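[Editor's note] The keystone play whose recap scrolled past above (TASKS RECAP at 15:04:27) distributed the fernet keys to all three control nodes, bootstrapped the admin project in RegionOne, and registered the identity service itself with an internal endpoint at https://api-int.testbed.osism.xyz:5000 and a public endpoint at https://api.testbed.osism.xyz:5000. A quick way to confirm that registration from the manager is to list the catalog with openstacksdk. The snippet below is only a read-only sketch; the clouds.yaml profile name "testbed" is an assumption, not something shown in this log.

# Read-only sketch: list what service-ks-register put into the keystone catalog.
# Assumption: a clouds.yaml profile called "testbed" with admin credentials exists.
import openstack

conn = openstack.connect(cloud="testbed")

for service in conn.identity.services():
    print(f"service:  {service.name} ({service.type})")

for endpoint in conn.identity.endpoints():
    print(f"endpoint: {endpoint.interface:8s} {endpoint.url}")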
orchestrator | 2025-08-29 15:05:21 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:21.329162 | orchestrator | 2025-08-29 15:05:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:24.427958 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:24.428055 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:24.428066 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:24.428075 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:24.428084 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:24.428092 | orchestrator | 2025-08-29 15:05:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:27.406786 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:27.407999 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:27.408942 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:27.411830 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:27.412765 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:27.412799 | orchestrator | 2025-08-29 15:05:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:30.464859 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:30.466284 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:30.470662 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:30.472532 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:30.474356 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:30.474379 | orchestrator | 2025-08-29 15:05:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:33.529405 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:33.529483 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:33.531655 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:33.532711 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:33.535304 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:33.535368 | orchestrator | 2025-08-29 15:05:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:36.604328 | orchestrator | 2025-08-29 15:05:36 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:36.763346 | 
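[Editor's note] The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the deployment wrapper on the manager polling the state of the background tasks it handed off for the individual plays; a play's console output is flushed here once its task reaches SUCCESS (as happens for d0c99fe9 at 15:05:12 above and af1537a6 at 15:05:39 below). The client code itself is not part of this log; the loop below is only a generic sketch of that wait-and-poll pattern, with get_task_state() standing in as a hypothetical callback for whatever the client uses to query the task backend.

# Generic poll-until-done sketch of the pattern seen in this log (not the actual client).
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll every task until it leaves STARTED; return {task_id: final_state}."""
    pending = set(task_ids)
    results = {}
    while pending:
        finished = set()
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical lookup, e.g. a Celery AsyncResult
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state
                finished.add(task_id)
        pending -= finished
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results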
orchestrator | 2025-08-29 15:05:36 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state STARTED 2025-08-29 15:05:36.763430 | orchestrator | 2025-08-29 15:05:36 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:36.763438 | orchestrator | 2025-08-29 15:05:36 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:36.763445 | orchestrator | 2025-08-29 15:05:36 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:36.763452 | orchestrator | 2025-08-29 15:05:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:39.644192 | orchestrator | 2025-08-29 15:05:39 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:39.644285 | orchestrator | 2025-08-29 15:05:39 | INFO  | Task af1537a6-b924-4f00-8ac0-ffe7cce02a51 is in state SUCCESS 2025-08-29 15:05:39.644777 | orchestrator | 2025-08-29 15:05:39.644796 | orchestrator | 2025-08-29 15:05:39.644801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:05:39.644806 | orchestrator | 2025-08-29 15:05:39.644811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:05:39.644816 | orchestrator | Friday 29 August 2025 15:04:30 +0000 (0:00:00.417) 0:00:00.417 ********* 2025-08-29 15:05:39.644820 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:05:39.644825 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:05:39.644847 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:05:39.644851 | orchestrator | ok: [testbed-manager] 2025-08-29 15:05:39.644855 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:05:39.644859 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:05:39.644863 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:05:39.644867 | orchestrator | 2025-08-29 15:05:39.644872 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:05:39.644876 | orchestrator | Friday 29 August 2025 15:04:32 +0000 (0:00:01.495) 0:00:01.913 ********* 2025-08-29 15:05:39.644880 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644885 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644889 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644894 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644898 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644902 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644906 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 15:05:39.644910 | orchestrator | 2025-08-29 15:05:39.644914 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 15:05:39.644918 | orchestrator | 2025-08-29 15:05:39.644925 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 15:05:39.644931 | orchestrator | Friday 29 August 2025 15:04:34 +0000 (0:00:02.115) 0:00:04.028 ********* 2025-08-29 15:05:39.644939 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:05:39.644950 | orchestrator | 2025-08-29 15:05:39.644956 | 
orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 15:05:39.644962 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:03.105) 0:00:07.134 ********* 2025-08-29 15:05:39.644968 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-08-29 15:05:39.644974 | orchestrator | 2025-08-29 15:05:39.644980 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 15:05:39.644986 | orchestrator | Friday 29 August 2025 15:04:40 +0000 (0:00:03.343) 0:00:10.478 ********* 2025-08-29 15:05:39.644992 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 15:05:39.645001 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 15:05:39.645008 | orchestrator | 2025-08-29 15:05:39.645013 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 15:05:39.645019 | orchestrator | Friday 29 August 2025 15:04:46 +0000 (0:00:06.101) 0:00:16.580 ********* 2025-08-29 15:05:39.645026 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:05:39.645033 | orchestrator | 2025-08-29 15:05:39.645040 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 15:05:39.645047 | orchestrator | Friday 29 August 2025 15:04:50 +0000 (0:00:03.435) 0:00:20.016 ********* 2025-08-29 15:05:39.645054 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:05:39.645058 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-08-29 15:05:39.645061 | orchestrator | 2025-08-29 15:05:39.645065 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 15:05:39.645069 | orchestrator | Friday 29 August 2025 15:04:53 +0000 (0:00:03.734) 0:00:23.750 ********* 2025-08-29 15:05:39.645073 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:05:39.645077 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-08-29 15:05:39.645081 | orchestrator | 2025-08-29 15:05:39.645096 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 15:05:39.645100 | orchestrator | Friday 29 August 2025 15:05:00 +0000 (0:00:06.182) 0:00:29.932 ********* 2025-08-29 15:05:39.645110 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-08-29 15:05:39.645113 | orchestrator | 2025-08-29 15:05:39.645117 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:39.645121 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645125 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645129 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645133 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645137 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645149 | orchestrator | testbed-node-4 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645153 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645157 | orchestrator | 2025-08-29 15:05:39.645161 | orchestrator | 2025-08-29 15:05:39.645164 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:39.645180 | orchestrator | Friday 29 August 2025 15:05:07 +0000 (0:00:07.518) 0:00:37.451 ********* 2025-08-29 15:05:39.645184 | orchestrator | =============================================================================== 2025-08-29 15:05:39.645188 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.52s 2025-08-29 15:05:39.645191 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.18s 2025-08-29 15:05:39.645195 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.10s 2025-08-29 15:05:39.645199 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.73s 2025-08-29 15:05:39.645202 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.44s 2025-08-29 15:05:39.645206 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.34s 2025-08-29 15:05:39.645210 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.11s 2025-08-29 15:05:39.645214 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.12s 2025-08-29 15:05:39.645217 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.50s 2025-08-29 15:05:39.645221 | orchestrator | 2025-08-29 15:05:39.645225 | orchestrator | 2025-08-29 15:05:39.645228 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-08-29 15:05:39.645232 | orchestrator | 2025-08-29 15:05:39.645236 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-08-29 15:05:39.645239 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.370) 0:00:00.370 ********* 2025-08-29 15:05:39.645244 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645247 | orchestrator | 2025-08-29 15:05:39.645251 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-08-29 15:05:39.645255 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:02.480) 0:00:02.851 ********* 2025-08-29 15:05:39.645259 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645265 | orchestrator | 2025-08-29 15:05:39.645271 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-08-29 15:05:39.645343 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:01.210) 0:00:04.062 ********* 2025-08-29 15:05:39.645353 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645359 | orchestrator | 2025-08-29 15:05:39.645364 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-08-29 15:05:39.645376 | orchestrator | Friday 29 August 2025 15:04:25 +0000 (0:00:01.319) 0:00:05.381 ********* 2025-08-29 15:05:39.645382 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645388 | orchestrator | 2025-08-29 15:05:39.645394 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] 
**************************** 2025-08-29 15:05:39.645399 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:02.054) 0:00:07.435 ********* 2025-08-29 15:05:39.645417 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645424 | orchestrator | 2025-08-29 15:05:39.645431 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-08-29 15:05:39.645437 | orchestrator | Friday 29 August 2025 15:04:29 +0000 (0:00:01.322) 0:00:08.757 ********* 2025-08-29 15:05:39.645443 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645447 | orchestrator | 2025-08-29 15:05:39.645450 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-08-29 15:05:39.645454 | orchestrator | Friday 29 August 2025 15:04:30 +0000 (0:00:01.181) 0:00:09.939 ********* 2025-08-29 15:05:39.645458 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645461 | orchestrator | 2025-08-29 15:05:39.645465 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-08-29 15:05:39.645469 | orchestrator | Friday 29 August 2025 15:04:32 +0000 (0:00:02.073) 0:00:12.012 ********* 2025-08-29 15:05:39.645472 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645476 | orchestrator | 2025-08-29 15:05:39.645479 | orchestrator | TASK [Create admin user] ******************************************************* 2025-08-29 15:05:39.645483 | orchestrator | Friday 29 August 2025 15:04:34 +0000 (0:00:01.792) 0:00:13.804 ********* 2025-08-29 15:05:39.645487 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:39.645523 | orchestrator | 2025-08-29 15:05:39.645534 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-08-29 15:05:39.645538 | orchestrator | Friday 29 August 2025 15:05:14 +0000 (0:00:40.258) 0:00:54.062 ********* 2025-08-29 15:05:39.645542 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:05:39.645545 | orchestrator | 2025-08-29 15:05:39.645549 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:05:39.645553 | orchestrator | 2025-08-29 15:05:39.645556 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:05:39.645560 | orchestrator | Friday 29 August 2025 15:05:14 +0000 (0:00:00.199) 0:00:54.262 ********* 2025-08-29 15:05:39.645564 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:05:39.645568 | orchestrator | 2025-08-29 15:05:39.645572 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:05:39.645578 | orchestrator | 2025-08-29 15:05:39.645584 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:05:39.645590 | orchestrator | Friday 29 August 2025 15:05:16 +0000 (0:00:01.703) 0:00:55.965 ********* 2025-08-29 15:05:39.645596 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:05:39.645601 | orchestrator | 2025-08-29 15:05:39.645607 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:05:39.645612 | orchestrator | 2025-08-29 15:05:39.645618 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:05:39.645626 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:11.305) 0:01:07.271 ********* 2025-08-29 15:05:39.645635 | orchestrator | 
changed: [testbed-node-2] 2025-08-29 15:05:39.645640 | orchestrator | 2025-08-29 15:05:39.645653 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:39.645660 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 15:05:39.645667 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645674 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645684 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:39.645690 | orchestrator | 2025-08-29 15:05:39.645695 | orchestrator | 2025-08-29 15:05:39.645701 | orchestrator | 2025-08-29 15:05:39.645706 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:39.645712 | orchestrator | Friday 29 August 2025 15:05:38 +0000 (0:00:11.166) 0:01:18.438 ********* 2025-08-29 15:05:39.645719 | orchestrator | =============================================================================== 2025-08-29 15:05:39.645724 | orchestrator | Create admin user ------------------------------------------------------ 40.26s 2025-08-29 15:05:39.645730 | orchestrator | Restart ceph manager service ------------------------------------------- 24.18s 2025-08-29 15:05:39.645737 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.48s 2025-08-29 15:05:39.645743 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s 2025-08-29 15:05:39.645749 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 2.05s 2025-08-29 15:05:39.645756 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.79s 2025-08-29 15:05:39.645762 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.32s 2025-08-29 15:05:39.645768 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.32s 2025-08-29 15:05:39.645774 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.21s 2025-08-29 15:05:39.645781 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.18s 2025-08-29 15:05:39.645785 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s 2025-08-29 15:05:39.645789 | orchestrator | 2025-08-29 15:05:39 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:39.647024 | orchestrator | 2025-08-29 15:05:39 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:39.649438 | orchestrator | 2025-08-29 15:05:39 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:39.649485 | orchestrator | 2025-08-29 15:05:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:42.693470 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:42.695601 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:42.696618 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:42.699409 | orchestrator | 
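[Editor's note] The ceph-rgw play above (TASKS RECAP at 15:05:07) registers the RADOS Gateway as the Swift object store in keystone: a swift (object-store) service, internal and public endpoints on port 6780 using the AUTH_%(project_id)s URL template, a ceph_rgw user in the service project, and the admin and ResellerAdmin role grants. Done by hand, that registration amounts to roughly the openstacksdk calls sketched below; the endpoint URLs and names are taken from the log, while the cloud profile name, the region, and the way the password is supplied are assumptions (the ResellerAdmin role creation is left out for brevity).

# Sketch of what service-ks-register does for ceph-rgw (illustrative, not the role itself).
# Assumptions: clouds.yaml profile "testbed" with admin rights; the ceph_rgw password is
# taken from the environment instead of the undisclosed kolla secrets; region RegionOne.
import os
import openstack

conn = openstack.connect(cloud="testbed")

service = conn.identity.create_service(name="swift", type="object-store")
for interface, host in (("internal", "api-int.testbed.osism.xyz"),
                        ("public", "api.testbed.osism.xyz")):
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        region_id="RegionOne",
        url=f"https://{host}:6780/swift/v1/AUTH_%(project_id)s",
    )

project = conn.identity.find_project("service")
user = conn.identity.create_user(name="ceph_rgw",
                                 password=os.environ["CEPH_RGW_KEYSTONE_PASSWORD"],
                                 default_project_id=project.id)
conn.identity.assign_project_role_to_user(project, user,
                                          conn.identity.find_role("admin"))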
2025-08-29 15:05:42 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:42.699462 | orchestrator | 2025-08-29 15:05:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:45.778623 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:45.779415 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:45.782417 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:45.783425 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:45.783470 | orchestrator | 2025-08-29 15:05:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:48.892988 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:48.894178 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:48.895259 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:48.896597 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:48.896827 | orchestrator | 2025-08-29 15:05:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:51.970354 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:51.972856 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:51.972911 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:51.975317 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:51.975380 | orchestrator | 2025-08-29 15:05:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:55.011969 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:55.014865 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:55.017723 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:55.022783 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:55.022862 | orchestrator | 2025-08-29 15:05:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:58.070208 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:05:58.071129 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:05:58.073378 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:05:58.074757 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:05:58.075857 | orchestrator | 2025-08-29 15:05:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:01.115899 | orchestrator | 2025-08-29 15:06:01 | INFO 
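[Editor's note] The "Bootstraph ceph dashboard" play above disables the dashboard mgr module, sets the mgr/dashboard options (SSL off, port 7000, bind address 0.0.0.0, standby behaviour "error" with a 404 status code), re-enables the module, and then creates the dashboard admin account from a temporary password file, which is the 40-second step in the recap. The play does not print the underlying commands, but they presumably map onto the Ceph CLI roughly as in the sketch below; the account name, the password file path, and running the ceph binary directly rather than inside a container are assumptions.

# Presumed CLI equivalent of the dashboard bootstrap tasks (sketch, not the playbook).
# Assumptions: "ceph" is runnable on testbed-manager and the dashboard password has
# already been written to /tmp/ceph_dashboard_password.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

ceph("mgr", "module", "disable", "dashboard")
ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
ceph("mgr", "module", "enable", "dashboard")
ceph("dashboard", "ac-user-create", "admin", "-i", "/tmp/ceph_dashboard_password", "administrator")

The "Restart ceph manager service" plays that follow in the log then bounce the mgr daemons on testbed-node-0, -1, and -2 so the new dashboard settings take effect.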
 | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:01.116598 | orchestrator | 2025-08-29 15:06:01 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:01.117736 | orchestrator | 2025-08-29 15:06:01 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:01.119342 | orchestrator | 2025-08-29 15:06:01 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:01.119378 | orchestrator | 2025-08-29 15:06:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:04.150277 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:04.152132 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:04.153350 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:04.154621 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:04.154786 | orchestrator | 2025-08-29 15:06:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:07.199403 | orchestrator | 2025-08-29 15:06:07 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:07.202373 | orchestrator | 2025-08-29 15:06:07 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:07.203947 | orchestrator | 2025-08-29 15:06:07 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:07.205564 | orchestrator | 2025-08-29 15:06:07 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:07.206696 | orchestrator | 2025-08-29 15:06:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:10.243049 | orchestrator | 2025-08-29 15:06:10 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:10.280615 | orchestrator | 2025-08-29 15:06:10 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:10.280679 | orchestrator | 2025-08-29 15:06:10 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:10.280689 | orchestrator | 2025-08-29 15:06:10 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:10.280698 | orchestrator | 2025-08-29 15:06:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:13.315652 | orchestrator | 2025-08-29 15:06:13 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:13.317511 | orchestrator | 2025-08-29 15:06:13 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:13.319599 | orchestrator | 2025-08-29 15:06:13 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:13.320889 | orchestrator | 2025-08-29 15:06:13 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:13.321101 | orchestrator | 2025-08-29 15:06:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:16.370682 | orchestrator | 2025-08-29 15:06:16 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:16.373446 | orchestrator | 2025-08-29 15:06:16 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:16.373524 | orchestrator | 2025-08-29 15:06:16 | INFO  | 
Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:16.374907 | orchestrator | 2025-08-29 15:06:16 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:16.374969 | orchestrator | 2025-08-29 15:06:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:19.421724 | orchestrator | 2025-08-29 15:06:19 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:19.422772 | orchestrator | 2025-08-29 15:06:19 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:19.423956 | orchestrator | 2025-08-29 15:06:19 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:19.425136 | orchestrator | 2025-08-29 15:06:19 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:19.425170 | orchestrator | 2025-08-29 15:06:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:22.465328 | orchestrator | 2025-08-29 15:06:22 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:22.467785 | orchestrator | 2025-08-29 15:06:22 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:22.469120 | orchestrator | 2025-08-29 15:06:22 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:22.470472 | orchestrator | 2025-08-29 15:06:22 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:22.470521 | orchestrator | 2025-08-29 15:06:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:25.518087 | orchestrator | 2025-08-29 15:06:25 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:25.518856 | orchestrator | 2025-08-29 15:06:25 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:25.520131 | orchestrator | 2025-08-29 15:06:25 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:25.521178 | orchestrator | 2025-08-29 15:06:25 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:25.521241 | orchestrator | 2025-08-29 15:06:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:28.554078 | orchestrator | 2025-08-29 15:06:28 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:28.559030 | orchestrator | 2025-08-29 15:06:28 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:28.562153 | orchestrator | 2025-08-29 15:06:28 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:28.563408 | orchestrator | 2025-08-29 15:06:28 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:28.563504 | orchestrator | 2025-08-29 15:06:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:31.609722 | orchestrator | 2025-08-29 15:06:31 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:31.609809 | orchestrator | 2025-08-29 15:06:31 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:31.609819 | orchestrator | 2025-08-29 15:06:31 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:31.609827 | orchestrator | 2025-08-29 15:06:31 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:31.609835 | orchestrator | 2025-08-29 15:06:31 | INFO  | 
Wait 1 second(s) until the next check 2025-08-29 15:06:34.637999 | orchestrator | 2025-08-29 15:06:34 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:34.638671 | orchestrator | 2025-08-29 15:06:34 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:34.639586 | orchestrator | 2025-08-29 15:06:34 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:34.640724 | orchestrator | 2025-08-29 15:06:34 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:34.640759 | orchestrator | 2025-08-29 15:06:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:37.680125 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:37.681413 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:37.682549 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:37.683420 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:37.683518 | orchestrator | 2025-08-29 15:06:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:40.785621 | orchestrator | 2025-08-29 15:06:40 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:40.787016 | orchestrator | 2025-08-29 15:06:40 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:40.788553 | orchestrator | 2025-08-29 15:06:40 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:40.789750 | orchestrator | 2025-08-29 15:06:40 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:40.789978 | orchestrator | 2025-08-29 15:06:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:43.824487 | orchestrator | 2025-08-29 15:06:43 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:43.825568 | orchestrator | 2025-08-29 15:06:43 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:43.826864 | orchestrator | 2025-08-29 15:06:43 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:43.828301 | orchestrator | 2025-08-29 15:06:43 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:43.828328 | orchestrator | 2025-08-29 15:06:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:46.905277 | orchestrator | 2025-08-29 15:06:46 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:46.906156 | orchestrator | 2025-08-29 15:06:46 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:46.908197 | orchestrator | 2025-08-29 15:06:46 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:46.908979 | orchestrator | 2025-08-29 15:06:46 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:46.909006 | orchestrator | 2025-08-29 15:06:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:49.954352 | orchestrator | 2025-08-29 15:06:49 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:06:49.955075 | orchestrator | 2025-08-29 15:06:49 | INFO  | Task 
3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:06:49.956053 | orchestrator | 2025-08-29 15:06:49 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:06:49.956903 | orchestrator | 2025-08-29 15:06:49 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:06:49.957041 | orchestrator | 2025-08-29 15:06:49 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated roughly every 3 seconds from 2025-08-29 15:06:52 through 2025-08-29 15:08:24; on every check the tasks de679f0c-5047-4dc0-ab29-59e133e20039, 3a675d39-c920-4395-bae0-6b076c389e61, 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a and 16fedadc-9b20-4e62-87d6-070a8b4542ce were reported in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
2025-08-29 15:08:27.910586 | orchestrator | 2025-08-29 15:08:27 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:27.916066 | orchestrator | 2025-08-29 15:08:27 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:27.920984 | orchestrator | 2025-08-29 15:08:27 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:08:27.921034 | orchestrator | 2025-08-29 15:08:27 | INFO  | Task 
16fedadc-9b20-4e62-87d6-070a8b4542ce is in state STARTED 2025-08-29 15:08:27.921376 | orchestrator | 2025-08-29 15:08:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:30.962436 | orchestrator | 2025-08-29 15:08:30 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:30.962500 | orchestrator | 2025-08-29 15:08:30 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:08:30.963228 | orchestrator | 2025-08-29 15:08:30 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:30.964859 | orchestrator | 2025-08-29 15:08:30 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:08:30.966248 | orchestrator | 2025-08-29 15:08:30 | INFO  | Task 16fedadc-9b20-4e62-87d6-070a8b4542ce is in state SUCCESS 2025-08-29 15:08:30.966538 | orchestrator | 2025-08-29 15:08:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:30.967728 | orchestrator | 2025-08-29 15:08:30.967748 | orchestrator | 2025-08-29 15:08:30.967756 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:08:30.967763 | orchestrator | 2025-08-29 15:08:30.967770 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:08:30.967777 | orchestrator | Friday 29 August 2025 15:04:31 +0000 (0:00:00.401) 0:00:00.401 ********* 2025-08-29 15:08:30.967784 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:08:30.967792 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:08:30.967798 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:08:30.967805 | orchestrator | 2025-08-29 15:08:30.967811 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:08:30.967818 | orchestrator | Friday 29 August 2025 15:04:31 +0000 (0:00:00.424) 0:00:00.826 ********* 2025-08-29 15:08:30.967825 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-08-29 15:08:30.967833 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-08-29 15:08:30.967839 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-08-29 15:08:30.967845 | orchestrator | 2025-08-29 15:08:30.967852 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-08-29 15:08:30.967859 | orchestrator | 2025-08-29 15:08:30.967865 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:08:30.967872 | orchestrator | Friday 29 August 2025 15:04:32 +0000 (0:00:00.598) 0:00:01.424 ********* 2025-08-29 15:08:30.967878 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:08:30.967884 | orchestrator | 2025-08-29 15:08:30.967891 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-08-29 15:08:30.967907 | orchestrator | Friday 29 August 2025 15:04:33 +0000 (0:00:01.259) 0:00:02.684 ********* 2025-08-29 15:08:30.967914 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-08-29 15:08:30.967920 | orchestrator | 2025-08-29 15:08:30.967927 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-08-29 15:08:30.967960 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:03.909) 0:00:06.593 ********* 2025-08-29 15:08:30.967967 | orchestrator | changed: 
[testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-08-29 15:08:30.967974 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-08-29 15:08:30.967980 | orchestrator | 2025-08-29 15:08:30.967987 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-08-29 15:08:30.967993 | orchestrator | Friday 29 August 2025 15:04:43 +0000 (0:00:06.201) 0:00:12.795 ********* 2025-08-29 15:08:30.968000 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-08-29 15:08:30.968006 | orchestrator | 2025-08-29 15:08:30.968012 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-08-29 15:08:30.968019 | orchestrator | Friday 29 August 2025 15:04:46 +0000 (0:00:02.960) 0:00:15.756 ********* 2025-08-29 15:08:30.968025 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:08:30.968032 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-08-29 15:08:30.968038 | orchestrator | 2025-08-29 15:08:30.968044 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-08-29 15:08:30.968051 | orchestrator | Friday 29 August 2025 15:04:50 +0000 (0:00:04.100) 0:00:19.856 ********* 2025-08-29 15:08:30.968057 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:08:30.968064 | orchestrator | 2025-08-29 15:08:30.968070 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-08-29 15:08:30.968077 | orchestrator | Friday 29 August 2025 15:04:54 +0000 (0:00:03.658) 0:00:23.514 ********* 2025-08-29 15:08:30.968083 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-08-29 15:08:30.968089 | orchestrator | 2025-08-29 15:08:30.968096 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 15:08:30.968102 | orchestrator | Friday 29 August 2025 15:04:58 +0000 (0:00:04.078) 0:00:27.592 ********* 2025-08-29 15:08:30.968121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968155 | orchestrator | 2025-08-29 15:08:30.968161 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:08:30.968168 | orchestrator | Friday 29 August 2025 15:05:08 +0000 (0:00:10.280) 0:00:37.873 ********* 2025-08-29 15:08:30.968177 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:08:30.968184 | orchestrator | 2025-08-29 15:08:30.968190 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-08-29 15:08:30.968196 | orchestrator | Friday 29 August 2025 15:05:12 +0000 (0:00:03.626) 0:00:41.500 ********* 2025-08-29 15:08:30.968213 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:30.968225 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:30.968235 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:30.968241 | orchestrator | 2025-08-29 15:08:30.968248 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-08-29 15:08:30.968254 | orchestrator | Friday 29 August 2025 15:05:22 +0000 (0:00:10.169) 0:00:51.670 ********* 2025-08-29 15:08:30.968261 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:08:30.968267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:08:30.968273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:08:30.968279 | orchestrator | 2025-08-29 15:08:30.968285 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-08-29 15:08:30.968290 | orchestrator | Friday 29 August 2025 15:05:24 +0000 (0:00:02.249) 0:00:53.919 ********* 2025-08-29 15:08:30.968296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:08:30.968305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:08:30.968311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:08:30.968318 | orchestrator | 2025-08-29 15:08:30.968324 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-08-29 15:08:30.968330 | orchestrator | Friday 29 August 2025 15:05:26 +0000 (0:00:01.560) 0:00:55.479 ********* 2025-08-29 15:08:30.968336 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:08:30.968342 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:08:30.968347 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:08:30.968353 | orchestrator | 2025-08-29 15:08:30.968360 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-08-29 15:08:30.968366 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:01.306) 0:00:56.786 ********* 2025-08-29 15:08:30.968372 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:08:30.968390 | orchestrator | 2025-08-29 15:08:30.968396 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-08-29 15:08:30.968403 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:00.293) 0:00:57.079 ********* 2025-08-29 15:08:30.968409 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968415 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968420 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968426 | orchestrator | 2025-08-29 15:08:30.968432 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:08:30.968438 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.355) 0:00:57.435 ********* 2025-08-29 15:08:30.968444 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:08:30.968450 | orchestrator | 2025-08-29 15:08:30.968455 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-08-29 15:08:30.968462 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.632) 0:00:58.067 ********* 2025-08-29 15:08:30.968474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968515 | orchestrator | 2025-08-29 15:08:30.968522 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 15:08:30.968529 | orchestrator | Friday 29 August 2025 15:05:36 +0000 (0:00:07.367) 0:01:05.435 ********* 2025-08-29 15:08:30.968544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:30.968552 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:30.968571 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:30.968591 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968597 | orchestrator | 2025-08-29 15:08:30.968604 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 15:08:30.968611 | orchestrator | Friday 29 August 2025 15:05:42 +0000 (0:00:06.555) 0:01:11.991 ********* 2025-08-29 15:08:30.968621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:30.968628 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:30.968651 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:30.968669 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968676 | orchestrator | 2025-08-29 15:08:30.968682 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-08-29 15:08:30.968689 | orchestrator | Friday 29 August 2025 15:05:50 +0000 (0:00:08.059) 0:01:20.050 ********* 2025-08-29 15:08:30.968696 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968703 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968710 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968717 | orchestrator | 2025-08-29 15:08:30.968724 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 15:08:30.968735 | orchestrator | Friday 29 August 2025 15:05:57 +0000 (0:00:06.513) 0:01:26.564 ********* 2025-08-29 15:08:30.968746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.968774 | orchestrator | 2025-08-29 15:08:30.968781 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 15:08:30.968788 | orchestrator | Friday 29 August 2025 15:06:03 +0000 (0:00:06.688) 0:01:33.253 ********* 2025-08-29 15:08:30.968794 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:30.968801 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:30.968808 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:30.968815 | orchestrator | 2025-08-29 15:08:30.968821 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 15:08:30.968831 | orchestrator | Friday 29 August 2025 15:06:19 +0000 (0:00:15.877) 
0:01:49.131 ********* 2025-08-29 15:08:30.968837 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968844 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968851 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968857 | orchestrator | 2025-08-29 15:08:30.968864 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-08-29 15:08:30.968871 | orchestrator | Friday 29 August 2025 15:06:27 +0000 (0:00:07.418) 0:01:56.549 ********* 2025-08-29 15:08:30.968877 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968884 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968890 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968896 | orchestrator | 2025-08-29 15:08:30.968903 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-08-29 15:08:30.968909 | orchestrator | Friday 29 August 2025 15:06:35 +0000 (0:00:08.259) 0:02:04.809 ********* 2025-08-29 15:08:30.968915 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968922 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968929 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968935 | orchestrator | 2025-08-29 15:08:30.968942 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-08-29 15:08:30.968948 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:06.839) 0:02:11.648 ********* 2025-08-29 15:08:30.968954 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.968961 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.968967 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.968974 | orchestrator | 2025-08-29 15:08:30.968981 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-08-29 15:08:30.968987 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:06.471) 0:02:18.120 ********* 2025-08-29 15:08:30.968996 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.969002 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.969009 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.969016 | orchestrator | 2025-08-29 15:08:30.969022 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 15:08:30.969032 | orchestrator | Friday 29 August 2025 15:06:49 +0000 (0:00:00.378) 0:02:18.498 ********* 2025-08-29 15:08:30.969038 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:08:30.969044 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.969050 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:08:30.969057 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.969063 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:08:30.969069 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.969075 | orchestrator | 2025-08-29 15:08:30.969081 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 15:08:30.969088 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:06.012) 0:02:24.511 ********* 2025-08-29 15:08:30.969094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.969108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.969119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:30.969127 | orchestrator | 2025-08-29 15:08:30.969133 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:08:30.969140 | orchestrator | Friday 29 August 2025 15:07:00 +0000 (0:00:05.143) 0:02:29.655 ********* 2025-08-29 15:08:30.969146 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:30.969152 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:30.969158 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:30.969165 | orchestrator | 2025-08-29 15:08:30.969171 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-08-29 15:08:30.969178 | orchestrator | Friday 29 August 2025 15:07:00 +0000 (0:00:00.421) 0:02:30.076 ********* 2025-08-29 15:08:30.969184 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:30.969191 | orchestrator | 2025-08-29 15:08:30.969197 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-08-29 15:08:30.969204 | orchestrator | Friday 29 August 2025 15:07:02 +0000 (0:00:02.036) 0:02:32.113 ********* 2025-08-29 15:08:30.969210 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:30.969217 | orchestrator | 2025-08-29 15:08:30.969223 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-08-29 15:08:30.969229 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:02.211) 0:02:34.324 ********* 2025-08-29 15:08:30.969236 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:30.969243 | orchestrator | 2025-08-29 15:08:30.969249 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-08-29 15:08:30.969259 | orchestrator | Friday 29 August 2025 15:07:07 +0000 (0:00:02.259) 0:02:36.584 ********* 2025-08-29 15:08:30.969265 | orchestrator | changed: [testbed-node-0] 2025-08-29 
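Note: the "Check glance containers" items above carry the full kolla-ansible service definition for glance_api: the image tag, the bind mounts, a healthcheck based on healthcheck_curl against port 9292, and the HAProxy settings whose custom_member_list points at the three controllers. A minimal sketch of how such member lines can be rendered from node addresses follows; the helper name and inputs are illustrative, not the actual kolla-ansible/OSISM code.

# Illustrative sketch only: rebuild the HAProxy custom_member_list entries
# seen in the glance_api definition above from a list of backend nodes.
def haproxy_member_lines(nodes, port, check_opts="check inter 2000 rise 2 fall 5"):
    """Render one 'server ...' line per backend node."""
    return [f"server {name} {ip}:{port} {check_opts}" for name, ip in nodes]

controllers = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]

if __name__ == "__main__":
    for line in haproxy_member_lines(controllers, 9292):
        print(line)

Running this prints the same three "server testbed-node-N 192.168.16.1N:9292 check inter 2000 rise 2 fall 5" lines that appear in the definitions above.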
15:08:30.969272 | orchestrator |
2025-08-29 15:08:30.969279 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-08-29 15:08:30.969286 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:27.243) 0:03:03.827 *********
2025-08-29 15:08:30.969301 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:30.969308 | orchestrator |
2025-08-29 15:08:30.969315 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:08:30.969322 | orchestrator | Friday 29 August 2025 15:07:36 +0000 (0:00:02.137) 0:03:05.965 *********
2025-08-29 15:08:30.969329 | orchestrator |
2025-08-29 15:08:30.969335 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:08:30.969342 | orchestrator | Friday 29 August 2025 15:07:36 +0000 (0:00:00.363) 0:03:06.329 *********
2025-08-29 15:08:30.969349 | orchestrator |
2025-08-29 15:08:30.969356 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:08:30.969362 | orchestrator | Friday 29 August 2025 15:07:37 +0000 (0:00:00.084) 0:03:06.413 *********
2025-08-29 15:08:30.969369 | orchestrator |
2025-08-29 15:08:30.969423 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-08-29 15:08:30.969433 | orchestrator | Friday 29 August 2025 15:07:37 +0000 (0:00:00.098) 0:03:06.512 *********
2025-08-29 15:08:30.969440 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:30.969447 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:30.969454 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:30.969460 | orchestrator |
2025-08-29 15:08:30.969467 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:08:30.969479 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:30.969487 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:30.969494 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:30.969501 | orchestrator |
2025-08-29 15:08:30.969508 | orchestrator |
2025-08-29 15:08:30.969515 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:08:30.969522 | orchestrator | Friday 29 August 2025 15:08:28 +0000 (0:00:50.868) 0:03:57.381 *********
2025-08-29 15:08:30.969529 | orchestrator | ===============================================================================
2025-08-29 15:08:30.969536 | orchestrator | glance : Restart glance-api container ---------------------------------- 50.87s
2025-08-29 15:08:30.969543 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.24s
2025-08-29 15:08:30.969551 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 15.88s
2025-08-29 15:08:30.969558 | orchestrator | glance : Ensuring config directories exist ----------------------------- 10.28s
2025-08-29 15:08:30.969566 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 10.17s
2025-08-29 15:08:30.969574 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 8.26s
2025-08-29 15:08:30.969582 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 8.06s
2025-08-29 15:08:30.969589 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 7.42s
2025-08-29 15:08:30.969597 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.37s
2025-08-29 15:08:30.969605 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.84s
2025-08-29 15:08:30.969612 | orchestrator | glance : Copying over config.json files for services -------------------- 6.69s
2025-08-29 15:08:30.969620 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.56s
2025-08-29 15:08:30.969627 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.51s
2025-08-29 15:08:30.969634 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.47s
2025-08-29 15:08:30.969642 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.20s
2025-08-29 15:08:30.969654 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.01s
2025-08-29 15:08:30.969661 | orchestrator | glance : Check glance containers ---------------------------------------- 5.14s
2025-08-29 15:08:30.969669 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.10s
2025-08-29 15:08:30.969676 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.08s
2025-08-29 15:08:30.969683 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.91s
2025-08-29 15:08:33.998890 | orchestrator | 2025-08-29 15:08:33 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:08:34.001117 | orchestrator | 2025-08-29 15:08:34 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:08:34.003250 | orchestrator | 2025-08-29 15:08:34 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:08:34.006594 | orchestrator | 2025-08-29 15:08:34 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED
2025-08-29 15:08:34.006650 | orchestrator | 2025-08-29 15:08:34 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:08:37.045763 | orchestrator | 2025-08-29 15:08:37 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:08:37.046675 | orchestrator | 2025-08-29 15:08:37 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:08:37.047349 | orchestrator | 2025-08-29 15:08:37 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:08:37.049039 | orchestrator | 2025-08-29 15:08:37 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED
2025-08-29 15:08:37.049080 | orchestrator | 2025-08-29 15:08:37 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:08:40.090579 | orchestrator | 2025-08-29 15:08:40 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:08:40.091208 | orchestrator | 2025-08-29 15:08:40 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:08:40.092529 | orchestrator | 2025-08-29 15:08:40 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:08:40.094446 | orchestrator | 2025-08-29 15:08:40 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED
2025-08-29 15:08:40.094515 | orchestrator |
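Note: the repeated INFO lines are the OSISM client on the manager polling the state of the tasks it enqueued (de679f0c-…, 5d01e064-…, 3a675d39-…, 2ba8c2dd-…) and sleeping one second between rounds until each reaches a final state such as SUCCESS. A minimal polling sketch follows, assuming a hypothetical get_task_state() lookup in place of the real task backend, which this log does not show.

# Illustrative sketch of the "Task ... is in state STARTED / Wait 1 second(s)"
# loop above. get_task_state() is a hypothetical stand-in for however OSISM
# queries its task backend.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            # The log only shows STARTED and SUCCESS; FAILURE is assumed to be
            # the other terminal state.
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)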
2025-08-29 15:08:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:43.152319 | orchestrator | 2025-08-29 15:08:43 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:43.154197 | orchestrator | 2025-08-29 15:08:43 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:08:43.155844 | orchestrator | 2025-08-29 15:08:43 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:43.157289 | orchestrator | 2025-08-29 15:08:43 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:08:43.157541 | orchestrator | 2025-08-29 15:08:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:46.193296 | orchestrator | 2025-08-29 15:08:46 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:46.193443 | orchestrator | 2025-08-29 15:08:46 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:08:46.195732 | orchestrator | 2025-08-29 15:08:46 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:46.195784 | orchestrator | 2025-08-29 15:08:46 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:08:46.195797 | orchestrator | 2025-08-29 15:08:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:49.229239 | orchestrator | 2025-08-29 15:08:49 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:49.230960 | orchestrator | 2025-08-29 15:08:49 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:08:49.233710 | orchestrator | 2025-08-29 15:08:49 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:49.235296 | orchestrator | 2025-08-29 15:08:49 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:08:49.235638 | orchestrator | 2025-08-29 15:08:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:52.286775 | orchestrator | 2025-08-29 15:08:52 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:52.288914 | orchestrator | 2025-08-29 15:08:52 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:08:52.289813 | orchestrator | 2025-08-29 15:08:52 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:52.291430 | orchestrator | 2025-08-29 15:08:52 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state STARTED 2025-08-29 15:08:52.291473 | orchestrator | 2025-08-29 15:08:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:55.340702 | orchestrator | 2025-08-29 15:08:55 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:08:55.342715 | orchestrator | 2025-08-29 15:08:55 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:08:55.345130 | orchestrator | 2025-08-29 15:08:55 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:08:55.347196 | orchestrator | 2025-08-29 15:08:55 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:08:55.353254 | orchestrator | 2025-08-29 15:08:55 | INFO  | Task 2ba8c2dd-4688-4ef8-ad5b-d3ee2bea485a is in state SUCCESS 2025-08-29 15:08:55.354852 | orchestrator | 2025-08-29 15:08:55.354897 | orchestrator | 2025-08-29 15:08:55.354904 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-08-29 15:08:55.354910 | orchestrator | 2025-08-29 15:08:55.354914 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:08:55.354919 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.334) 0:00:00.334 ********* 2025-08-29 15:08:55.354923 | orchestrator | ok: [testbed-manager] 2025-08-29 15:08:55.354928 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:08:55.354933 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:08:55.354936 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:08:55.354940 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:08:55.354944 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:08:55.354948 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:08:55.354951 | orchestrator | 2025-08-29 15:08:55.354955 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:08:55.354959 | orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:01.047) 0:00:01.382 ********* 2025-08-29 15:08:55.354963 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354968 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354971 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354975 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354979 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354983 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354986 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-08-29 15:08:55.354990 | orchestrator | 2025-08-29 15:08:55.354994 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-08-29 15:08:55.355012 | orchestrator | 2025-08-29 15:08:55.355016 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 15:08:55.355029 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.886) 0:00:02.268 ********* 2025-08-29 15:08:55.355035 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:08:55.355044 | orchestrator | 2025-08-29 15:08:55.355050 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-08-29 15:08:55.355057 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:02.322) 0:00:04.591 ********* 2025-08-29 15:08:55.355069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355087 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:55.355095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355168 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355288 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:55.355298 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355324 | orchestrator | 2025-08-29 15:08:55.355328 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 15:08:55.355332 | orchestrator | Friday 29 August 2025 15:04:29 +0000 (0:00:04.615) 0:00:09.206 ********* 2025-08-29 15:08:55.355336 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:08:55.355341 | orchestrator | 2025-08-29 15:08:55.355345 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 15:08:55.355349 | orchestrator | Friday 29 August 2025 15:04:31 +0000 (0:00:02.620) 0:00:11.827 ********* 2025-08-29 15:08:55.355353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355357 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355391 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:55.355399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
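Note: the loop items above are the same per-service definitions that later drive container deployment: prometheus_node_exporter runs with pid_mode 'host' and mounts the host root read-only at /host, while prometheus_cadvisor mounts /sys, /var/run and /var/lib/docker. As a reading aid only, here is a rough translation of such a definition into a docker-run-style command line; kolla-ansible actually manages the containers through its own module, so this is an approximation, not the real mechanism.

# Rough illustration only: turn a service definition like the
# prometheus_node_exporter item above into a docker-run-style argv.
def docker_run_argv(definition):
    value = definition["value"]
    argv = ["docker", "run", "--detach", "--name", value["container_name"]]
    if value.get("pid_mode") == "host":
        argv.append("--pid=host")
    for volume in value.get("volumes", []):
        if volume:  # some definitions in the log contain empty placeholder entries ('')
            argv += ["--volume", volume]
    argv.append(value["image"])
    return argv

node_exporter = {
    "key": "prometheus-node-exporter",
    "value": {
        "container_name": "prometheus_node_exporter",
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
    },
}

print(" ".join(docker_run_argv(node_exporter)))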
15:08:55.355414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355426 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.355437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355469 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355668 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.355696 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:55.355855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.355876 | orchestrator | 2025-08-29 15:08:55.355881 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-08-29 15:08:55.355885 | orchestrator | Friday 29 August 2025 15:04:39 +0000 (0:00:07.904) 0:00:19.731 ********* 2025-08-29 15:08:55.355889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:08:55.355893 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.355902 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.355910 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:08:55.355915 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355922 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.355927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.355931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.355946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.355957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.355972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.355984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.355994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2025-08-29 15:08:55.355998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356002 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.356006 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.356010 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.356016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356033 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.356039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356067 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.356072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356095 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.356163 | orchestrator | 2025-08-29 15:08:55.356168 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-08-29 15:08:55.356172 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:02.112) 0:00:21.844 ********* 2025-08-29 15:08:55.356176 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:08:55.356181 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:08:55.356226 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356240 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356244 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.356248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356477 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.356483 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.356489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:55.356545 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.356552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.356564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.356576 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.356582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.357034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.357053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.357060 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.357067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:55.357088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.357094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:55.357100 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.357106 | orchestrator | 2025-08-29 15:08:55.357112 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 15:08:55.357118 | orchestrator | Friday 29 August 2025 15:04:44 +0000 (0:00:02.352) 0:00:24.196 ********* 2025-08-29 15:08:55.357124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:55.357149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357206 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.357213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357248 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357268 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:55.357288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.357303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357313 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.357322 | orchestrator | 2025-08-29 15:08:55.357326 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-08-29 15:08:55.357349 | orchestrator | Friday 29 August 2025 15:04:51 +0000 (0:00:06.847) 0:00:31.044 ********* 2025-08-29 15:08:55.357355 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:55.357361 | orchestrator | 2025-08-29 15:08:55.357400 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-08-29 15:08:55.357406 | orchestrator | Friday 29 August 2025 15:04:52 +0000 (0:00:01.431) 0:00:32.476 ********* 2025-08-29 15:08:55.357412 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357419 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.357445 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357456 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357462 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357468 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357475 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357496 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357504 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094690, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5563567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357511 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357515 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357519 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357523 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357527 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357578 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357582 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357589 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357593 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357597 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357601 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357605 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094700, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5610385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.357617 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357621 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357628 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357632 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357636 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357640 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357648 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357658 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357663 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357670 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357675 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357679 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094687, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.357684 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357691 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357698 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357702 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357707 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357714 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357718 
| orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357723 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357731 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357738 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357743 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357751 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357760 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357766 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357773 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357784 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357795 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357802 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357808 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094696, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5589912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.357819 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357826 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357837 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357841 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357845 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357852 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357856 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357863 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357867 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357873 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357877 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357881 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357888 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094683, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.357893 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357899 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357903 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357910 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357914 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357918 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357925 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55 | INFO  | Wait 1 second(s) until the next check  2025-08-29 15:08:55.357981 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357988 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707,
'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357995 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.357999 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358003 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358007 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358050 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358056 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358063 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358071 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358075 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358079 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094691, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5568295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358083 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358090 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358094 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358101 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358108 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358112 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358118 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358125 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358136 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358146 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358161 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094695, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358167 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358173 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358178 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.358185 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358191 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.358198 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358208 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358214 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358230 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358237 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358243 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358247 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358251 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358255 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.358261 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094692, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5571313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358270 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358276 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358280 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358284 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358288 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.358292 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358296 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.358300 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358306 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 
1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:55.358315 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.358319 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094689, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.554969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358325 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094699, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094681, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.551888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358333 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094707, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.563084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358337 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094698, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5599692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358341 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094685, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.552969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358348 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094682, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5523098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358355 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094694, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.557897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358361 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094693, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5576324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358384 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094706, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.562354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:55.358390 | orchestrator | 2025-08-29 15:08:55.358396 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-08-29 15:08:55.358402 | orchestrator | Friday 29 August 2025 15:05:37 +0000 (0:00:44.987) 0:01:17.463 ********* 2025-08-29 15:08:55.358408 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:55.358414 | orchestrator | 2025-08-29 15:08:55.358420 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-08-29 15:08:55.358426 | orchestrator | Friday 29 
August 2025 15:05:38 +0000 (0:00:01.206) 0:01:18.670 *********
2025-08-29 15:08:55.358432 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358462 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 15:08:55.358467 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358497 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:08:55.358504 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358542 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358577 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358613 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358641 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-08-29 15:08:55.358671 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:08:55.358677 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:08:55.358683 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 15:08:55.358689 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:08:55.358695 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:08:55.358701 | orchestrator |
2025-08-29 15:08:55.358713 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-08-29 15:08:55.358718 | orchestrator | Friday 29 August 2025 15:05:44 +0000 (0:00:05.341) 0:01:24.012 *********
2025-08-29 15:08:55.358722 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358726 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:55.358730 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358734 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:55.358738 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358742 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:55.358745 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358749 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:08:55.358753 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358757 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:08:55.358761 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358765 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:08:55.358768 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:08:55.358777 | orchestrator |
2025-08-29 15:08:55.358780 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-08-29 15:08:55.358784 | orchestrator | Friday 29 August 2025 15:06:22 +0000 (0:00:38.090) 0:02:02.103 *********
2025-08-29 15:08:55.358788 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:08:55.358792 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:55.358796 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:08:55.358800 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:55.358803 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:08:55.358807 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:55.358811 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:08:55.358815 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:08:55.358818 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:08:55.358822 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:08:55.358826 | orchestrator | skipping: [testbed-node-5] =>
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:55.358830 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.358833 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-08-29 15:08:55.358837 | orchestrator | 2025-08-29 15:08:55.358841 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-08-29 15:08:55.358845 | orchestrator | Friday 29 August 2025 15:06:28 +0000 (0:00:06.031) 0:02:08.134 ********* 2025-08-29 15:08:55.358848 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:55.358852 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.358856 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:55.358860 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-08-29 15:08:55.358864 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.358868 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:55.358875 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.358880 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:55.358883 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.358887 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:55.358891 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.358895 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:55.358899 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.358903 | orchestrator | 2025-08-29 15:08:55.358907 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-08-29 15:08:55.358911 | orchestrator | Friday 29 August 2025 15:06:33 +0000 (0:00:04.935) 0:02:13.069 ********* 2025-08-29 15:08:55.358915 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:55.358918 | orchestrator | 2025-08-29 15:08:55.358922 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-08-29 15:08:55.358926 | orchestrator | Friday 29 August 2025 15:06:34 +0000 (0:00:01.030) 0:02:14.100 ********* 2025-08-29 15:08:55.358934 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.358938 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.358942 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.358945 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.358952 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.358956 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.358960 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.358963 | orchestrator | 2025-08-29 15:08:55.358967 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-08-29 15:08:55.358971 
| orchestrator | Friday 29 August 2025 15:06:35 +0000 (0:00:00.880) 0:02:14.980 ********* 2025-08-29 15:08:55.358975 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.358978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.358982 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.358986 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.358991 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:55.358995 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:55.358999 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:55.359003 | orchestrator | 2025-08-29 15:08:55.359007 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-08-29 15:08:55.359011 | orchestrator | Friday 29 August 2025 15:06:38 +0000 (0:00:03.743) 0:02:18.724 ********* 2025-08-29 15:08:55.359015 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359019 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.359022 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359026 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.359030 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359035 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359041 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.359047 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.359053 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359059 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.359065 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359071 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.359078 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:55.359084 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.359090 | orchestrator | 2025-08-29 15:08:55.359097 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-08-29 15:08:55.359103 | orchestrator | Friday 29 August 2025 15:06:41 +0000 (0:00:03.116) 0:02:21.841 ********* 2025-08-29 15:08:55.359107 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:55.359111 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.359115 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:55.359119 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.359123 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:55.359127 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.359130 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:55.359134 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.359138 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 15:08:55.359146 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:55.359150 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.359154 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:55.359158 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.359162 | orchestrator | 2025-08-29 15:08:55.359166 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 15:08:55.359174 | orchestrator | Friday 29 August 2025 15:06:44 +0000 (0:00:02.345) 0:02:24.187 ********* 2025-08-29 15:08:55.359177 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:55.359181 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 15:08:55.359185 | orchestrator | due to this access issue: 2025-08-29 15:08:55.359189 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 15:08:55.359193 | orchestrator | not a directory 2025-08-29 15:08:55.359197 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:55.359201 | orchestrator | 2025-08-29 15:08:55.359205 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 15:08:55.359208 | orchestrator | Friday 29 August 2025 15:06:46 +0000 (0:00:02.293) 0:02:26.481 ********* 2025-08-29 15:08:55.359212 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.359216 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.359220 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.359223 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.359227 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.359231 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.359235 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.359239 | orchestrator | 2025-08-29 15:08:55.359242 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 15:08:55.359246 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:01.408) 0:02:27.889 ********* 2025-08-29 15:08:55.359250 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.359253 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:55.359257 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:55.359264 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:55.359267 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:55.359271 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:55.359275 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:55.359279 | orchestrator | 2025-08-29 15:08:55.359282 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 15:08:55.359286 | orchestrator | Friday 29 August 2025 15:06:49 +0000 (0:00:01.186) 0:02:29.075 ********* 2025-08-29 15:08:55.359291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359297 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:55.359305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:55.359353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359361 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359415 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:55.359420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:55.359457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359465 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:55.359480 | orchestrator | 2025-08-29 15:08:55.359484 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-08-29 15:08:55.359488 | orchestrator | Friday 29 August 2025 15:06:54 +0000 (0:00:05.124) 0:02:34.199 ********* 2025-08-29 15:08:55.359492 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 15:08:55.359496 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:55.359500 | orchestrator | 2025-08-29 15:08:55.359503 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:08:55.359507 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:01.357) 0:02:35.557 ********* 2025-08-29 15:08:55.359514 | orchestrator | 2025-08-29 15:08:55.359518 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2025-08-29 15:08:55.359522 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:00.069) 0:02:35.626 ********* 2025-08-29 15:08:55.359526 | orchestrator | 2025-08-29 15:08:55.359529 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:08:55.359533 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:00.075) 0:02:35.702 ********* 2025-08-29 15:08:55.359537 | orchestrator | 2025-08-29 15:08:55.359541 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:08:55.359545 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.264) 0:02:35.966 ********* 2025-08-29 15:08:55.359548 | orchestrator | 2025-08-29 15:08:55.359552 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:08:55.359556 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.067) 0:02:36.034 ********* 2025-08-29 15:08:55.359560 | orchestrator | 2025-08-29 15:08:55.359564 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:08:55.359568 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.093) 0:02:36.128 ********* 2025-08-29 15:08:55.359571 | orchestrator | 2025-08-29 15:08:55.359575 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:08:55.359579 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.128) 0:02:36.257 ********* 2025-08-29 15:08:55.359583 | orchestrator | 2025-08-29 15:08:55.359587 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-08-29 15:08:55.359590 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.169) 0:02:36.427 ********* 2025-08-29 15:08:55.359594 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:55.359598 | orchestrator | 2025-08-29 15:08:55.359602 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-08-29 15:08:55.359605 | orchestrator | Friday 29 August 2025 15:07:21 +0000 (0:00:24.755) 0:03:01.182 ********* 2025-08-29 15:08:55.359609 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:55.359613 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:55.359617 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:08:55.359620 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:55.359624 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:55.359628 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:08:55.359632 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:08:55.359636 | orchestrator | 2025-08-29 15:08:55.359639 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-08-29 15:08:55.359643 | orchestrator | Friday 29 August 2025 15:07:37 +0000 (0:00:16.192) 0:03:17.374 ********* 2025-08-29 15:08:55.359647 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:55.359651 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:55.359654 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:55.359658 | orchestrator | 2025-08-29 15:08:55.359662 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-08-29 15:08:55.359666 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:12.320) 0:03:29.695 ********* 2025-08-29 15:08:55.359670 | orchestrator | 
changed: [testbed-node-1] 2025-08-29 15:08:55.359674 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:55.359677 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:55.359682 | orchestrator | 2025-08-29 15:08:55.359685 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-08-29 15:08:55.359689 | orchestrator | Friday 29 August 2025 15:08:04 +0000 (0:00:14.682) 0:03:44.378 ********* 2025-08-29 15:08:55.359693 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:55.359699 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:55.359706 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:55.359711 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:08:55.359721 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:55.359727 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:08:55.359737 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:08:55.359743 | orchestrator | 2025-08-29 15:08:55.359748 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-08-29 15:08:55.359754 | orchestrator | Friday 29 August 2025 15:08:23 +0000 (0:00:19.102) 0:04:03.480 ********* 2025-08-29 15:08:55.359760 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:55.359766 | orchestrator | 2025-08-29 15:08:55.359773 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-08-29 15:08:55.359780 | orchestrator | Friday 29 August 2025 15:08:31 +0000 (0:00:07.749) 0:04:11.229 ********* 2025-08-29 15:08:55.359786 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:55.359792 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:55.359798 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:55.359804 | orchestrator | 2025-08-29 15:08:55.359810 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-08-29 15:08:55.359814 | orchestrator | Friday 29 August 2025 15:08:36 +0000 (0:00:05.171) 0:04:16.401 ********* 2025-08-29 15:08:55.359818 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:55.359821 | orchestrator | 2025-08-29 15:08:55.359825 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-08-29 15:08:55.359829 | orchestrator | Friday 29 August 2025 15:08:45 +0000 (0:00:08.489) 0:04:24.891 ********* 2025-08-29 15:08:55.359833 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:08:55.359837 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:08:55.359841 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:08:55.359845 | orchestrator | 2025-08-29 15:08:55.359852 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:08:55.359856 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 15:08:55.359861 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 15:08:55.359865 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 15:08:55.359869 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 15:08:55.359873 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 15:08:55.359877 | orchestrator | 
testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:55.359880 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:55.359884 | orchestrator |
2025-08-29 15:08:55.359888 | orchestrator |
2025-08-29 15:08:55.359892 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:08:55.359896 | orchestrator | Friday 29 August 2025 15:08:52 +0000 (0:00:07.729) 0:04:32.620 *********
2025-08-29 15:08:55.359900 | orchestrator | ===============================================================================
2025-08-29 15:08:55.359904 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 44.99s
2025-08-29 15:08:55.359907 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 38.09s
2025-08-29 15:08:55.359911 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.76s
2025-08-29 15:08:55.359915 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.10s
2025-08-29 15:08:55.359919 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.19s
2025-08-29 15:08:55.359922 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 14.68s
2025-08-29 15:08:55.359930 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.32s
2025-08-29 15:08:55.359934 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.49s
2025-08-29 15:08:55.359938 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.90s
2025-08-29 15:08:55.359942 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.75s
2025-08-29 15:08:55.359945 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.73s
2025-08-29 15:08:55.359949 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.85s
2025-08-29 15:08:55.359953 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 6.03s
2025-08-29 15:08:55.359957 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 5.34s
2025-08-29 15:08:55.359961 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.17s
2025-08-29 15:08:55.359964 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.12s
2025-08-29 15:08:55.359968 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.94s
2025-08-29 15:08:55.359972 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.62s
2025-08-29 15:08:55.359975 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.74s
2025-08-29 15:08:55.359979 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.12s
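The "Flush handlers" and "RUNNING HANDLER [prometheus : Restart ...]" entries earlier in this play are Ansible's notify/handler mechanism: each configuration task queues a restart, and the queued restarts run only after the configuration is in place. A minimal sketch of that pattern follows, assuming a "prometheus" host group and a plain docker CLI restart; the actual kolla-ansible role drives its containers through its own container module, so this is illustrative only:

---
- hosts: prometheus
  tasks:
    - name: Copying over prometheus config file
      ansible.builtin.template:
        src: prometheus.yml.j2
        dest: /etc/kolla/prometheus-server/prometheus.yml
        mode: "0660"
      notify: Restart prometheus-server container   # queues the handler, does not run it yet

    - name: Flush handlers
      ansible.builtin.meta: flush_handlers          # run all queued restarts at this point

  handlers:
    - name: Restart prometheus-server container
      ansible.builtin.command: docker restart prometheus_server   # illustrative; not the kolla-ansible handler
      changed_when: true

Handlers only fire for hosts whose notifying task reported "changed", which is why in the log above only testbed-manager restarts prometheus_server while the compute and control nodes show "skipping" for that config file.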
2025-08-29 15:08:58.409270 | orchestrator | 2025-08-29 15:08:58 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:08:58.410419 | orchestrator | 2025-08-29 15:08:58 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:08:58.411922 | orchestrator | 2025-08-29 15:08:58 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:08:58.413522 | orchestrator | 2025-08-29 15:08:58 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:08:58.413565 | orchestrator | 2025-08-29 15:08:58 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:01.464207 | orchestrator | 2025-08-29 15:09:01 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:01.466009 | orchestrator | 2025-08-29 15:09:01 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:01.467254 | orchestrator | 2025-08-29 15:09:01 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:01.468795 | orchestrator | 2025-08-29 15:09:01 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:09:01.468850 | orchestrator | 2025-08-29 15:09:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:04.515871 | orchestrator | 2025-08-29 15:09:04 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:04.519602 | orchestrator | 2025-08-29 15:09:04 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:04.521774 | orchestrator | 2025-08-29 15:09:04 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:04.524078 | orchestrator | 2025-08-29 15:09:04 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:09:04.524214 | orchestrator | 2025-08-29 15:09:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:07.561778 | orchestrator | 2025-08-29 15:09:07 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:07.563105 | orchestrator | 2025-08-29 15:09:07 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:07.563543 | orchestrator | 2025-08-29 15:09:07 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:07.565099 | orchestrator | 2025-08-29 15:09:07 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:09:07.565231 | orchestrator | 2025-08-29 15:09:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:10.603093 | orchestrator | 2025-08-29 15:09:10 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:10.606273 | orchestrator | 2025-08-29 15:09:10 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:10.607945 | orchestrator | 2025-08-29 15:09:10 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:10.608888 | orchestrator | 2025-08-29 15:09:10 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:09:10.609151 | orchestrator | 2025-08-29 15:09:10 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:13.671108 | orchestrator | 2025-08-29 15:09:13 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:13.671896 | orchestrator | 2025-08-29 15:09:13 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:13.673659 | orchestrator | 2025-08-29 15:09:13 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:13.675845 | orchestrator | 2025-08-29 15:09:13 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED
2025-08-29 15:09:13.676400 | orchestrator | 2025-08-29 15:09:13 | INFO  | Wait 1 second(s) until the next check
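The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the OSISM manager polling its queued deployment tasks once per second until each reports SUCCESS (the first SUCCESS appears a few entries further down). The same wait-until pattern can be expressed as an Ansible sketch; check_task_state.sh is a hypothetical placeholder for whatever actually reports the task state, not an osism CLI command:

---
- hosts: orchestrator
  gather_facts: false
  tasks:
    - name: Wait until the deployment task reports SUCCESS
      ansible.builtin.command: /usr/local/bin/check_task_state.sh de679f0c-5047-4dc0-ab29-59e133e20039   # hypothetical helper
      register: task_state
      changed_when: false
      until: task_state.stdout == "SUCCESS"
      delay: 1        # mirrors "Wait 1 second(s) until the next check"
      retries: 600    # give up after roughly ten minutes

In the job itself the polling simply continues below until the tasks finish.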
15:09:16.837779 | orchestrator | 2025-08-29 15:09:16 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:16.837854 | orchestrator | 2025-08-29 15:09:16 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:16.837860 | orchestrator | 2025-08-29 15:09:16 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:16.837865 | orchestrator | 2025-08-29 15:09:16 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:09:16.837871 | orchestrator | 2025-08-29 15:09:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:19.862521 | orchestrator | 2025-08-29 15:09:19 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:19.864706 | orchestrator | 2025-08-29 15:09:19 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:19.866396 | orchestrator | 2025-08-29 15:09:19 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:19.868657 | orchestrator | 2025-08-29 15:09:19 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state STARTED 2025-08-29 15:09:19.868898 | orchestrator | 2025-08-29 15:09:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:22.914840 | orchestrator | 2025-08-29 15:09:22 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:22.920922 | orchestrator | 2025-08-29 15:09:22 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:22.921992 | orchestrator | 2025-08-29 15:09:22 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:22.923019 | orchestrator | 2025-08-29 15:09:22 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:22.925511 | orchestrator | 2025-08-29 15:09:22 | INFO  | Task 3a675d39-c920-4395-bae0-6b076c389e61 is in state SUCCESS 2025-08-29 15:09:22.925737 | orchestrator | 2025-08-29 15:09:22.927806 | orchestrator | 2025-08-29 15:09:22.927847 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:09:22.927873 | orchestrator | 2025-08-29 15:09:22.927880 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:09:22.927886 | orchestrator | Friday 29 August 2025 15:04:36 +0000 (0:00:00.579) 0:00:00.579 ********* 2025-08-29 15:09:22.927892 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:22.927900 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:22.927908 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:22.927916 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:09:22.927921 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:09:22.927926 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:09:22.927932 | orchestrator | 2025-08-29 15:09:22.927937 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:09:22.927943 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:00.956) 0:00:01.537 ********* 2025-08-29 15:09:22.927948 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-08-29 15:09:22.927954 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-08-29 15:09:22.927960 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-08-29 15:09:22.927965 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-08-29 
15:09:22.927970 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-08-29 15:09:22.927976 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-08-29 15:09:22.927984 | orchestrator |
2025-08-29 15:09:22.927993 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-08-29 15:09:22.928001 | orchestrator |
2025-08-29 15:09:22.928010 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:09:22.928019 | orchestrator | Friday 29 August 2025 15:04:38 +0000 (0:00:00.887) 0:00:02.425 *********
2025-08-29 15:09:22.928026 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:09:22.928050 | orchestrator |
2025-08-29 15:09:22.928056 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-08-29 15:09:22.928061 | orchestrator | Friday 29 August 2025 15:04:40 +0000 (0:00:01.457) 0:00:03.883 *********
2025-08-29 15:09:22.928068 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-08-29 15:09:22.928078 | orchestrator |
2025-08-29 15:09:22.928087 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-08-29 15:09:22.928096 | orchestrator | Friday 29 August 2025 15:04:43 +0000 (0:00:02.984) 0:00:06.868 *********
2025-08-29 15:09:22.928106 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-08-29 15:09:22.928115 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-08-29 15:09:22.928160 | orchestrator |
2025-08-29 15:09:22.928171 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-08-29 15:09:22.928181 | orchestrator | Friday 29 August 2025 15:04:49 +0000 (0:00:06.137) 0:00:13.005 *********
2025-08-29 15:09:22.928190 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:09:22.928200 | orchestrator |
2025-08-29 15:09:22.928210 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-08-29 15:09:22.928220 | orchestrator | Friday 29 August 2025 15:04:52 +0000 (0:00:03.123) 0:00:16.129 *********
2025-08-29 15:09:22.928229 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:09:22.928239 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-08-29 15:09:22.928248 | orchestrator |
2025-08-29 15:09:22.928256 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-08-29 15:09:22.928265 | orchestrator | Friday 29 August 2025 15:04:56 +0000 (0:00:03.624) 0:00:19.754 *********
2025-08-29 15:09:22.928274 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:09:22.928284 | orchestrator |
2025-08-29 15:09:22.928293 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-08-29 15:09:22.928311 | orchestrator | Friday 29 August 2025 15:05:00 +0000 (0:00:04.681) 0:00:24.436 *********
2025-08-29 15:09:22.928320 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-08-29 15:09:22.928329 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
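The service-ks-register block above registers cinder in Keystone: service, endpoints, service project, service user, and role grants. A rough equivalent using the openstack.cloud collection is sketched below with the values visible in the log; it is not the kolla-ansible service-ks-register role itself, and it assumes an "admin" entry in clouds.yaml plus a cinder_keystone_password variable supplied elsewhere:

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: cinder | Creating services
      openstack.cloud.catalog_service:
        cloud: admin
        name: cinderv3
        service_type: volumev3
        state: present

    - name: cinder | Creating endpoints
      openstack.cloud.endpoint:
        cloud: admin
        service: cinderv3
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s" }
        - { interface: public, url: "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s" }

    - name: cinder | Creating projects
      openstack.cloud.project:
        cloud: admin
        name: service
        state: present

    - name: cinder | Creating users
      openstack.cloud.identity_user:
        cloud: admin
        name: cinder
        password: "{{ cinder_keystone_password }}"   # assumed to come from the secrets store
        default_project: service
        update_password: on_create                   # the [WARNING] above refers to this parameter
        state: present

    - name: cinder | Granting user roles
      openstack.cloud.role_assignment:
        cloud: admin
        user: cinder
        project: service
        role: "{{ item }}"
      loop:
        - admin
        - service

As the task names in the log show, kolla-ansible performs these steps through its service-ks-register role; the sketch is only meant to make the sequence of API calls easier to reason about.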
15:09:22.928338 | orchestrator | 2025-08-29 15:09:22.928347 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-08-29 15:09:22.928402 | orchestrator | Friday 29 August 2025 15:05:09 +0000 (0:00:08.622) 0:00:33.058 ********* 2025-08-29 15:09:22.928436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.928446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.928588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.928601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.928679 | orchestrator | 2025-08-29 15:09:22.928685 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:09:22.928690 | orchestrator | Friday 29 August 2025 15:05:17 +0000 (0:00:08.479) 0:00:41.537 ********* 2025-08-29 15:09:22.928696 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.928702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.928707 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.928713 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.928718 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.928723 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 15:09:22.928729 | orchestrator | 2025-08-29 15:09:22.928734 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:09:22.928740 | orchestrator | Friday 29 August 2025 15:05:19 +0000 (0:00:01.533) 0:00:43.071 ********* 2025-08-29 15:09:22.928745 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.928751 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.928756 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.928761 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:09:22.928767 | orchestrator | 2025-08-29 15:09:22.928772 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-08-29 15:09:22.928778 | orchestrator | Friday 29 August 2025 15:05:21 +0000 (0:00:01.892) 0:00:44.963 ********* 2025-08-29 15:09:22.928783 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-08-29 15:09:22.928789 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-08-29 15:09:22.928794 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-08-29 15:09:22.928800 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-08-29 15:09:22.928805 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-08-29 15:09:22.928811 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-08-29 15:09:22.928821 | orchestrator | 2025-08-29 15:09:22.928826 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-08-29 15:09:22.928831 | orchestrator | Friday 29 August 2025 15:05:23 +0000 (0:00:02.625) 0:00:47.588 ********* 2025-08-29 15:09:22.928839 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:09:22.928846 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:09:22.928860 | orchestrator | 
skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:09:22.928866 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:09:22.928873 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:09:22.928883 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:09:22.928889 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:09:22.928903 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:09:22.928909 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:09:22.928918 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:09:22.928934 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:09:22.928944 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:09:22.928952 | orchestrator | 2025-08-29 15:09:22.928958 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-08-29 15:09:22.928983 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:04.844) 0:00:52.433 ********* 2025-08-29 15:09:22.928989 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:09:22.928997 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:09:22.929007 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:09:22.929016 | orchestrator | 2025-08-29 15:09:22.929025 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-08-29 15:09:22.929034 | orchestrator | Friday 29 August 2025 15:05:32 +0000 (0:00:03.414) 0:00:55.847 ********* 2025-08-29 15:09:22.929057 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-08-29 15:09:22.929087 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-08-29 15:09:22.929093 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-08-29 15:09:22.929099 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:09:22.929104 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:09:22.929110 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:09:22.929115 | orchestrator | 2025-08-29 15:09:22.929121 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-08-29 15:09:22.929126 | orchestrator | Friday 29 August 2025 15:05:36 +0000 (0:00:03.979) 0:00:59.827 ********* 2025-08-29 15:09:22.929138 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 
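
For orientation, the "Copy over Ceph keyring files" steps logged above boil down to an ordinary Ansible copy loop over the per-service keyrings. The sketch below is an illustrative approximation only, not the actual kolla-ansible cinder role; the source directory, destination directory, and file mode are assumptions inferred from the item names shown in the log.

    # Illustrative sketch, not the kolla-ansible task itself.
    # src/dest paths and mode are assumptions; kolla-ansible derives them
    # from its own variables and splits the copy per service (volume/backup).
    - name: Copy over Ceph keyring files for cinder services (sketch)
      ansible.builtin.copy:
        src: "/etc/kolla/config/cinder/{{ item }}"      # assumed operator-side config dir
        dest: "/etc/kolla/cinder-volume/{{ item }}"     # assumed per-container config dir on the node
        mode: "0600"
      become: true
      loop:
        - ceph.client.cinder.keyring
        - ceph.client.cinder-backup.keyring
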
2025-08-29 15:09:22.929143 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-08-29 15:09:22.929149 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-08-29 15:09:22.929154 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-08-29 15:09:22.929160 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-08-29 15:09:22.929165 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-08-29 15:09:22.929170 | orchestrator | 2025-08-29 15:09:22.929179 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-08-29 15:09:22.929185 | orchestrator | Friday 29 August 2025 15:05:37 +0000 (0:00:01.442) 0:01:01.270 ********* 2025-08-29 15:09:22.929192 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.929201 | orchestrator | 2025-08-29 15:09:22.929210 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-08-29 15:09:22.929220 | orchestrator | Friday 29 August 2025 15:05:37 +0000 (0:00:00.301) 0:01:01.571 ********* 2025-08-29 15:09:22.929225 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.929250 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.929255 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.929261 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.929266 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.929271 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.929277 | orchestrator | 2025-08-29 15:09:22.929282 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:09:22.929288 | orchestrator | Friday 29 August 2025 15:05:40 +0000 (0:00:02.598) 0:01:04.170 ********* 2025-08-29 15:09:22.929294 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:09:22.929301 | orchestrator | 2025-08-29 15:09:22.929307 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-08-29 15:09:22.929312 | orchestrator | Friday 29 August 2025 15:05:43 +0000 (0:00:03.445) 0:01:07.615 ********* 2025-08-29 15:09:22.929318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.929325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.929348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.929378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929740 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.929751 | orchestrator | 2025-08-29 15:09:22.929757 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-08-29 15:09:22.929763 | orchestrator | Friday 29 August 2025 15:05:49 +0000 (0:00:05.396) 0:01:13.011 ********* 2025-08-29 15:09:22.929776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.929787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929793 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.929799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.929805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.929816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929874 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.929879 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.929885 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.929890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929902 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.929908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929933 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.929939 | orchestrator | 2025-08-29 15:09:22.929944 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-08-29 15:09:22.929950 | orchestrator | Friday 29 August 2025 15:05:51 +0000 (0:00:02.270) 0:01:15.281 ********* 2025-08-29 15:09:22.929956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.929961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.929973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.929983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.929989 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.930002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.930008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930061 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.930069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930086 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.930091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.930121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930133 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.930138 | orchestrator | 2025-08-29 15:09:22.930144 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-08-29 15:09:22.930149 | orchestrator | Friday 29 August 2025 15:05:54 +0000 (0:00:03.209) 0:01:18.491 ********* 2025-08-29 15:09:22.930155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930254 | orchestrator | 2025-08-29 15:09:22.930260 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 15:09:22.930265 | orchestrator | Friday 29 August 2025 15:05:59 +0000 (0:00:04.285) 0:01:22.777 ********* 2025-08-29 15:09:22.930271 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:09:22.930276 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.930282 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:09:22.930288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:09:22.930293 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:09:22.930298 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.930304 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:09:22.930310 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.930321 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:09:22.930327 | orchestrator | 2025-08-29 15:09:22.930333 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 15:09:22.930338 | orchestrator | Friday 29 August 2025 15:06:02 +0000 (0:00:03.784) 0:01:26.561 ********* 2025-08-29 15:09:22.930344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930464 | orchestrator | 2025-08-29 15:09:22.930470 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 15:09:22.930476 | orchestrator | Friday 29 August 2025 15:06:18 +0000 (0:00:15.820) 0:01:42.382 ********* 2025-08-29 15:09:22.930481 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.930487 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.930492 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.930498 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:09:22.930503 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:09:22.930508 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:09:22.930514 | orchestrator | 2025-08-29 15:09:22.930519 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 15:09:22.930525 | orchestrator | Friday 29 August 2025 15:06:22 +0000 (0:00:03.329) 0:01:45.712 ********* 2025-08-29 15:09:22.930530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.930536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930549 | orchestrator | skipping: [testbed-node-0] 
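Each container definition echoed in these items carries a Docker-style healthcheck: interval, retries, start_period and timeout in seconds, plus a CMD-SHELL test such as healthcheck_port cinder-volume 5672 or healthcheck_curl http://192.168.16.10:8776. A minimal sketch of how such a dict maps onto a Docker SDK healthcheck, assuming the docker Python package; to_docker_healthcheck() is illustrative only and not the kolla_container module the play actually uses:

from docker.types import Healthcheck  # docker Python SDK

def to_docker_healthcheck(hc: dict) -> Healthcheck:
    # kolla-style dicts store seconds as strings; the Docker API expects nanoseconds
    sec = 1_000_000_000
    return Healthcheck(
        test=hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port cinder-volume 5672']
        interval=int(hc["interval"]) * sec,
        timeout=int(hc["timeout"]) * sec,
        start_period=int(hc["start_period"]) * sec,
        retries=int(hc["retries"]),
    )

example = {"interval": "30", "retries": "3", "start_period": "5",
           "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
           "timeout": "30"}
print(to_docker_healthcheck(example))  # pass as healthcheck= when creating the container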
=> (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.930555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930565 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.930571 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.930577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:09:22.930583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930589 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.930594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930606 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.930619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930638 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.930643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:09:22.930655 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.930661 | orchestrator | 2025-08-29 15:09:22.930666 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 15:09:22.930672 | orchestrator | Friday 29 August 2025 15:06:25 +0000 (0:00:03.620) 0:01:49.332 ********* 2025-08-29 15:09:22.930678 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.930683 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.930688 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.930694 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.930699 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.930705 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.930710 | orchestrator | 2025-08-29 15:09:22.930716 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-08-29 15:09:22.930721 | orchestrator | Friday 29 August 2025 15:06:27 +0000 (0:00:01.479) 0:01:50.811 ********* 2025-08-29 15:09:22.930736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:09:22.930761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 
15:09:22.930824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:22.930848 | orchestrator | 2025-08-29 15:09:22.930853 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:09:22.930859 | orchestrator | Friday 29 August 2025 15:06:32 +0000 (0:00:04.948) 0:01:55.759 ********* 2025-08-29 15:09:22.930865 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:22.930870 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:22.930876 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:22.930881 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:22.930886 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:22.930892 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:22.930897 | orchestrator | 2025-08-29 15:09:22.930903 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-08-29 15:09:22.930908 | orchestrator | Friday 29 August 2025 15:06:33 +0000 (0:00:01.343) 0:01:57.102 ********* 2025-08-29 15:09:22.930914 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:22.930919 | orchestrator | 2025-08-29 15:09:22.930925 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-08-29 15:09:22.930930 | orchestrator | Friday 29 August 2025 15:06:35 +0000 (0:00:02.453) 0:01:59.556 ********* 2025-08-29 15:09:22.930935 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:22.930941 | orchestrator | 2025-08-29 15:09:22.930946 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-08-29 15:09:22.930952 | orchestrator | Friday 29 August 2025 15:06:38 +0000 (0:00:02.371) 0:02:01.927 ********* 2025-08-29 15:09:22.930957 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:22.930963 | orchestrator | 2025-08-29 15:09:22.930968 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 15:09:22.930974 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:18.739) 0:02:20.667 
********* 2025-08-29 15:09:22.930980 | orchestrator | 2025-08-29 15:09:22.930985 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 15:09:22.930991 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.167) 0:02:20.834 ********* 2025-08-29 15:09:22.930996 | orchestrator | 2025-08-29 15:09:22.931001 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 15:09:22.931007 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.076) 0:02:20.910 ********* 2025-08-29 15:09:22.931012 | orchestrator | 2025-08-29 15:09:22.931017 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 15:09:22.931023 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.171) 0:02:21.082 ********* 2025-08-29 15:09:22.931028 | orchestrator | 2025-08-29 15:09:22.931034 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 15:09:22.931039 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.161) 0:02:21.243 ********* 2025-08-29 15:09:22.931045 | orchestrator | 2025-08-29 15:09:22.931050 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 15:09:22.931060 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.088) 0:02:21.332 ********* 2025-08-29 15:09:22.931066 | orchestrator | 2025-08-29 15:09:22.931071 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-08-29 15:09:22.931077 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.074) 0:02:21.406 ********* 2025-08-29 15:09:22.931083 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:22.931088 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:22.931094 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:22.931099 | orchestrator | 2025-08-29 15:09:22.931105 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-08-29 15:09:22.931110 | orchestrator | Friday 29 August 2025 15:07:25 +0000 (0:00:28.169) 0:02:49.575 ********* 2025-08-29 15:09:22.931116 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:22.931122 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:22.931127 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:22.931133 | orchestrator | 2025-08-29 15:09:22.931138 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-08-29 15:09:22.931144 | orchestrator | Friday 29 August 2025 15:07:39 +0000 (0:00:13.378) 0:03:02.954 ********* 2025-08-29 15:09:22.931149 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:09:22.931155 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:09:22.931160 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:09:22.931165 | orchestrator | 2025-08-29 15:09:22.931171 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-08-29 15:09:22.931177 | orchestrator | Friday 29 August 2025 15:09:05 +0000 (0:01:25.850) 0:04:28.804 ********* 2025-08-29 15:09:22.931182 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:09:22.931187 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:09:22.931193 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:09:22.931199 | orchestrator | 2025-08-29 15:09:22.931204 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder 
services to update service versions] ***
2025-08-29 15:09:22.931210 | orchestrator | Friday 29 August 2025 15:09:17 +0000 (0:00:12.610) 0:04:41.414 *********
2025-08-29 15:09:22.931215 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:22.931221 | orchestrator |
2025-08-29 15:09:22.931226 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:09:22.931266 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:09:22.931274 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:09:22.931280 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:09:22.931285 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:09:22.931291 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:09:22.931296 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:09:22.931302 | orchestrator |
2025-08-29 15:09:22.931308 | orchestrator |
2025-08-29 15:09:22.931313 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:09:22.931319 | orchestrator | Friday 29 August 2025 15:09:19 +0000 (0:00:01.279) 0:04:42.694 *********
2025-08-29 15:09:22.931324 | orchestrator | ===============================================================================
2025-08-29 15:09:22.931330 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 85.85s
2025-08-29 15:09:22.931336 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.17s
2025-08-29 15:09:22.931345 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.74s
2025-08-29 15:09:22.931398 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.82s
2025-08-29 15:09:22.931406 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.38s
2025-08-29 15:09:22.931411 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.61s
2025-08-29 15:09:22.931416 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.62s
2025-08-29 15:09:22.931422 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 8.48s
2025-08-29 15:09:22.931427 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.14s
2025-08-29 15:09:22.931433 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.40s
2025-08-29 15:09:22.931438 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.95s
2025-08-29 15:09:22.931444 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.84s
2025-08-29 15:09:22.931449 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.68s
2025-08-29 15:09:22.931454 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.29s
2025-08-29 15:09:22.931458 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.98s
2025-08-29 15:09:22.931463 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.78s
2025-08-29 15:09:22.931468 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.62s
2025-08-29 15:09:22.931473 | orchestrator | cinder : Copying over existing policy file ------------------------------ 3.62s
2025-08-29 15:09:22.931478 | orchestrator | cinder : include_tasks -------------------------------------------------- 3.45s
2025-08-29 15:09:22.931482 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.41s
2025-08-29 15:09:22.931487 | orchestrator | 2025-08-29 15:09:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:25.966106 | orchestrator | 2025-08-29 15:09:25 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:25.967505 | orchestrator | 2025-08-29 15:09:25 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED
2025-08-29 15:09:25.969001 | orchestrator | 2025-08-29 15:09:25 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:25.969755 | orchestrator | 2025-08-29 15:09:25 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:25.970199 | orchestrator | 2025-08-29 15:09:25 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:29.016139 | orchestrator | 2025-08-29 15:09:29 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:29.017269 | orchestrator | 2025-08-29 15:09:29 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED
2025-08-29 15:09:29.018528 | orchestrator | 2025-08-29 15:09:29 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:29.020008 | orchestrator | 2025-08-29 15:09:29 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:29.020055 | orchestrator | 2025-08-29 15:09:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:32.068459 | orchestrator | 2025-08-29 15:09:32 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:32.069130 | orchestrator | 2025-08-29 15:09:32 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED
2025-08-29 15:09:32.069906 | orchestrator | 2025-08-29 15:09:32 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:32.070885 | orchestrator | 2025-08-29 15:09:32 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:32.070968 | orchestrator | 2025-08-29 15:09:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:35.140393 | orchestrator | 2025-08-29 15:09:35 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:35.142793 | orchestrator | 2025-08-29 15:09:35 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED
2025-08-29 15:09:35.144472 | orchestrator | 2025-08-29 15:09:35 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:09:35.146198 | orchestrator | 2025-08-29 15:09:35 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED
2025-08-29 15:09:35.146255 | orchestrator | 2025-08-29 15:09:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:09:38.185167 | orchestrator | 2025-08-29 15:09:38 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:09:38.185746 | orchestrator | 2025-08-29 15:09:38 | INFO  | Task
89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:38.187092 | orchestrator | 2025-08-29 15:09:38 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:38.188605 | orchestrator | 2025-08-29 15:09:38 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:38.188641 | orchestrator | 2025-08-29 15:09:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:41.240649 | orchestrator | 2025-08-29 15:09:41 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:41.241268 | orchestrator | 2025-08-29 15:09:41 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:41.242472 | orchestrator | 2025-08-29 15:09:41 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:41.243326 | orchestrator | 2025-08-29 15:09:41 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:41.243383 | orchestrator | 2025-08-29 15:09:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:44.280424 | orchestrator | 2025-08-29 15:09:44 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:44.280804 | orchestrator | 2025-08-29 15:09:44 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:44.283418 | orchestrator | 2025-08-29 15:09:44 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:44.284159 | orchestrator | 2025-08-29 15:09:44 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:44.284208 | orchestrator | 2025-08-29 15:09:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:47.335184 | orchestrator | 2025-08-29 15:09:47 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:47.338365 | orchestrator | 2025-08-29 15:09:47 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:47.339545 | orchestrator | 2025-08-29 15:09:47 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:47.341639 | orchestrator | 2025-08-29 15:09:47 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:47.341674 | orchestrator | 2025-08-29 15:09:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:50.376851 | orchestrator | 2025-08-29 15:09:50 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:50.378225 | orchestrator | 2025-08-29 15:09:50 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:50.380620 | orchestrator | 2025-08-29 15:09:50 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:50.382603 | orchestrator | 2025-08-29 15:09:50 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:09:50.382641 | orchestrator | 2025-08-29 15:09:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:53.418762 | orchestrator | 2025-08-29 15:09:53 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:09:53.419676 | orchestrator | 2025-08-29 15:09:53 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:09:53.421397 | orchestrator | 2025-08-29 15:09:53 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:09:53.423567 | orchestrator | 2025-08-29 15:09:53 | INFO  | Task 
5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:11:15.832964 | orchestrator | 2025-08-29 15:11:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:18.878546 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:11:18.879383 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:11:18.880252 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:11:18.882940 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:11:18.882987 | orchestrator | 2025-08-29 15:11:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:21.916725 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:11:21.917746 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:11:21.919053 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:11:21.920221 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:11:21.920260 | orchestrator | 2025-08-29 15:11:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:24.966745 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:11:24.968076 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:11:24.970408 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:11:24.972486 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:11:24.972533 | orchestrator | 2025-08-29 15:11:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:28.045492 | orchestrator | 2025-08-29 15:11:28 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:11:28.045598 | orchestrator | 2025-08-29 15:11:28 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:11:28.045616 | orchestrator | 2025-08-29 15:11:28 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:11:28.045780 | orchestrator | 2025-08-29 15:11:28 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state STARTED 2025-08-29 15:11:28.045850 | orchestrator | 2025-08-29 15:11:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:31.105374 | orchestrator | 2025-08-29 15:11:31 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:11:31.107219 | orchestrator | 2025-08-29 15:11:31 | INFO  | Task d3b67bea-6cb1-4a03-87c1-ce352b84e388 is in state STARTED 2025-08-29 15:11:31.108496 | orchestrator | 2025-08-29 15:11:31 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED 2025-08-29 15:11:31.110061 | orchestrator | 2025-08-29 15:11:31 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:11:31.113723 | orchestrator | 2025-08-29 15:11:31 | INFO  | Task 5031c75a-e900-4489-b8c2-cf2b4e4c34cf is in state SUCCESS 2025-08-29 15:11:31.115684 | orchestrator | 2025-08-29 15:11:31.115763 | 
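The polling above is the orchestrator waiting for asynchronous deployment tasks to finish: it re-reads each task's state and sleeps briefly between checks until a task leaves STARTED for SUCCESS (or a failure state). A minimal sketch of such a loop, assuming a hypothetical get_task_state(task_id) helper rather than the actual OSISM client API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll task states until no task is still PENDING/STARTED."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED", "SUCCESS", "FAILURE"
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)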
orchestrator | 2025-08-29 15:11:31.115774 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:11:31.115782 | orchestrator | 2025-08-29 15:11:31.115789 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:11:31.115813 | orchestrator | Friday 29 August 2025 15:08:58 +0000 (0:00:00.352) 0:00:00.352 ********* 2025-08-29 15:11:31.115817 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:11:31.115822 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:11:31.115826 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:11:31.115830 | orchestrator | 2025-08-29 15:11:31.115834 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:11:31.115838 | orchestrator | Friday 29 August 2025 15:08:59 +0000 (0:00:00.347) 0:00:00.700 ********* 2025-08-29 15:11:31.115842 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-08-29 15:11:31.115847 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-08-29 15:11:31.115850 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-08-29 15:11:31.115854 | orchestrator | 2025-08-29 15:11:31.115858 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-08-29 15:11:31.115862 | orchestrator | 2025-08-29 15:11:31.115866 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:11:31.115869 | orchestrator | Friday 29 August 2025 15:08:59 +0000 (0:00:00.564) 0:00:01.264 ********* 2025-08-29 15:11:31.115874 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:11:31.115878 | orchestrator | 2025-08-29 15:11:31.115882 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-08-29 15:11:31.115886 | orchestrator | Friday 29 August 2025 15:09:00 +0000 (0:00:00.792) 0:00:02.056 ********* 2025-08-29 15:11:31.115890 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-08-29 15:11:31.115894 | orchestrator | 2025-08-29 15:11:31.115897 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-08-29 15:11:31.115901 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:03.480) 0:00:05.537 ********* 2025-08-29 15:11:31.115905 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-08-29 15:11:31.115920 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-08-29 15:11:31.115924 | orchestrator | 2025-08-29 15:11:31.115928 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-08-29 15:11:31.115932 | orchestrator | Friday 29 August 2025 15:09:10 +0000 (0:00:06.802) 0:00:12.342 ********* 2025-08-29 15:11:31.115935 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:11:31.115939 | orchestrator | 2025-08-29 15:11:31.115943 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-08-29 15:11:31.115947 | orchestrator | Friday 29 August 2025 15:09:14 +0000 (0:00:03.606) 0:00:15.948 ********* 2025-08-29 15:11:31.115951 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:11:31.115955 | orchestrator | 
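The service-ks-register tasks above register barbican in Keystone: the key-manager service, its internal and public endpoints, the service project and user, and the creator/observer/audit roles. kolla-ansible performs these steps with its own Ansible modules; the openstacksdk calls below are only a rough, illustrative equivalent of the first two steps. The endpoint URLs are taken from the log, while the cloud name "testbed" and the region "RegionOne" are assumptions, not values from this deployment:

    import openstack

    # Assumes a clouds.yaml entry named "testbed" with admin credentials.
    conn = openstack.connect(cloud="testbed")

    # Mirrors the "Creating services" / "Creating endpoints" tasks in the log.
    service = conn.identity.create_service(name="barbican", type="key-manager")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9311"),
        ("public", "https://api.testbed.osism.xyz:9311"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # region name is an assumption
        )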
changed: [testbed-node-0] => (item=barbican -> service) 2025-08-29 15:11:31.115959 | orchestrator | 2025-08-29 15:11:31.115963 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-08-29 15:11:31.115966 | orchestrator | Friday 29 August 2025 15:09:18 +0000 (0:00:03.995) 0:00:19.944 ********* 2025-08-29 15:11:31.115970 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:11:31.115974 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-08-29 15:11:31.115978 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-08-29 15:11:31.115981 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-08-29 15:11:31.115985 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-08-29 15:11:31.115989 | orchestrator | 2025-08-29 15:11:31.115993 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-08-29 15:11:31.115996 | orchestrator | Friday 29 August 2025 15:09:34 +0000 (0:00:15.981) 0:00:35.925 ********* 2025-08-29 15:11:31.116000 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-08-29 15:11:31.116004 | orchestrator | 2025-08-29 15:11:31.116007 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-08-29 15:11:31.116015 | orchestrator | Friday 29 August 2025 15:09:39 +0000 (0:00:04.940) 0:00:40.866 ********* 2025-08-29 15:11:31.116021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116088 | orchestrator | 2025-08-29 15:11:31.116092 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-08-29 15:11:31.116099 | orchestrator | Friday 29 August 2025 15:09:41 +0000 (0:00:02.116) 0:00:42.982 ********* 2025-08-29 15:11:31.116103 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-08-29 15:11:31.116106 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-08-29 15:11:31.116110 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-08-29 15:11:31.116114 | orchestrator | 2025-08-29 15:11:31.116118 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-08-29 15:11:31.116121 | orchestrator | Friday 29 August 2025 15:09:43 +0000 (0:00:01.652) 0:00:44.634 ********* 2025-08-29 15:11:31.116125 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.116130 | orchestrator | 2025-08-29 15:11:31.116137 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-08-29 15:11:31.116171 | orchestrator | Friday 29 August 2025 15:09:43 +0000 (0:00:00.170) 0:00:44.804 ********* 2025-08-29 15:11:31.116180 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.116184 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:31.116188 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:31.116191 | orchestrator | 2025-08-29 15:11:31.116195 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:11:31.116199 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.675) 0:00:45.479 ********* 2025-08-29 15:11:31.116202 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:11:31.116206 | orchestrator | 2025-08-29 15:11:31.116232 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-08-29 15:11:31.116236 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.658) 0:00:46.138 ********* 2025-08-29 15:11:31.116240 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2025-08-29 15:11:31.116413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116443 | orchestrator | 2025-08-29 15:11:31.116447 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-08-29 15:11:31.116451 | orchestrator | Friday 29 August 2025 
15:09:49 +0000 (0:00:05.195) 0:00:51.334 ********* 2025-08-29 15:11:31.116460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.116468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116478 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.116487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.116492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116508 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:31.116512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.116517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116525 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:31.116530 | orchestrator | 2025-08-29 15:11:31.116537 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-08-29 15:11:31.116541 | orchestrator | Friday 29 August 2025 15:09:52 +0000 (0:00:02.416) 0:00:53.751 ********* 2025-08-29 15:11:31.116546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.116553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116570 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.116574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.116579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116590 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:31.116594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.116606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.116614 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:31.116618 | orchestrator | 2025-08-29 15:11:31.116621 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-08-29 15:11:31.116625 | orchestrator | Friday 29 August 2025 15:09:53 +0000 (0:00:01.165) 0:00:54.916 ********* 2025-08-29 15:11:31.116629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
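Each loop item above is one entry from the role's service map, already split into the 'key'/'value' shape that dict2items produces: the key is the service name and the value carries the container name, image, volumes, healthcheck, and optional haproxy settings. A plain-Python equivalent of that loop structure, using a trimmed, illustrative subset of the values shown in the log, looks like this:

    # Illustrative subset of the barbican service map from the log output.
    barbican_services = {
        "barbican-api": {
            "container_name": "barbican_api",
            "image": "registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711",
            "volumes": ["/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro"],
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]},
        },
        "barbican-worker": {
            "container_name": "barbican_worker",
            "image": "registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711",
            "volumes": ["/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro"],
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"]},
        },
    }

    # Each task sees the service name as item.key and the definition as item.value.
    for name, definition in barbican_services.items():
        print(f"ensuring config dir and config.json for {definition['container_name']}")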
2025-08-29 15:11:31.116874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.116878 | orchestrator | 2025-08-29 15:11:31.116882 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-08-29 15:11:31.116888 | orchestrator | Friday 29 August 2025 15:09:59 +0000 (0:00:05.775) 0:01:00.691 ********* 2025-08-29 15:11:31.116894 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.116904 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:11:31.116910 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:11:31.116918 | orchestrator | 2025-08-29 15:11:31.116926 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-08-29 15:11:31.116933 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:00:04.351) 0:01:05.043 ********* 2025-08-29 15:11:31.116938 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:11:31.116943 | orchestrator | 2025-08-29 15:11:31.116949 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-08-29 15:11:31.116955 | orchestrator | Friday 29 August 2025 15:10:05 +0000 (0:00:01.432) 0:01:06.476 ********* 2025-08-29 15:11:31.116960 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.116966 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:31.116971 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:31.116975 | orchestrator | 2025-08-29 15:11:31.116979 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-08-29 15:11:31.116983 | orchestrator | Friday 29 August 2025 15:10:07 +0000 (0:00:02.186) 0:01:08.662 ********* 2025-08-29 15:11:31.116987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.116995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.117003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.117010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117079 | orchestrator | 2025-08-29 15:11:31.117085 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 15:11:31.117089 | orchestrator | Friday 29 August 2025 15:10:23 +0000 (0:00:16.310) 0:01:24.972 ********* 2025-08-29 15:11:31.117097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.117101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.117105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.117109 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.117117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.117125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.117155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.117159 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:31.117166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:11:31.117170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.117174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:11:31.117182 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:31.117186 | orchestrator | 2025-08-29 15:11:31.117189 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 15:11:31.117193 | orchestrator | Friday 29 August 2025 15:10:25 +0000 (0:00:01.662) 0:01:26.635 ********* 2025-08-29 15:11:31.117201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.117205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.117212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:31.117216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-08-29 15:11:31.117234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:31.117253 | orchestrator | 2025-08-29 15:11:31.117256 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:11:31.117260 | orchestrator | Friday 29 August 2025 15:10:29 +0000 (0:00:04.582) 0:01:31.217 ********* 2025-08-29 15:11:31.117310 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:31.117318 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:31.117322 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:31.117326 | orchestrator | 2025-08-29 15:11:31.117330 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 15:11:31.117334 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:01.273) 0:01:32.490 ********* 2025-08-29 15:11:31.117337 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.117345 | orchestrator | 2025-08-29 15:11:31.117349 | 
orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 15:11:31.117353 | orchestrator | Friday 29 August 2025 15:10:33 +0000 (0:00:02.638) 0:01:35.129 ********* 2025-08-29 15:11:31.117356 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.117360 | orchestrator | 2025-08-29 15:11:31.117364 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 15:11:31.117368 | orchestrator | Friday 29 August 2025 15:10:36 +0000 (0:00:03.278) 0:01:38.407 ********* 2025-08-29 15:11:31.117371 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.117375 | orchestrator | 2025-08-29 15:11:31.117379 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:11:31.117383 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:13.461) 0:01:51.869 ********* 2025-08-29 15:11:31.117386 | orchestrator | 2025-08-29 15:11:31.117390 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:11:31.117394 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:00.197) 0:01:52.066 ********* 2025-08-29 15:11:31.117398 | orchestrator | 2025-08-29 15:11:31.117401 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:11:31.117405 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:00.073) 0:01:52.140 ********* 2025-08-29 15:11:31.117409 | orchestrator | 2025-08-29 15:11:31.117412 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-08-29 15:11:31.117416 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:00.071) 0:01:52.211 ********* 2025-08-29 15:11:31.117420 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.117424 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:11:31.117428 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:11:31.117433 | orchestrator | 2025-08-29 15:11:31.117437 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 15:11:31.117441 | orchestrator | Friday 29 August 2025 15:11:02 +0000 (0:00:11.294) 0:02:03.506 ********* 2025-08-29 15:11:31.117445 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.117449 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:11:31.117457 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:11:31.117461 | orchestrator | 2025-08-29 15:11:31.117465 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 15:11:31.117470 | orchestrator | Friday 29 August 2025 15:11:15 +0000 (0:00:13.070) 0:02:16.577 ********* 2025-08-29 15:11:31.117474 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:11:31.117478 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:11:31.117482 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:11:31.117487 | orchestrator | 2025-08-29 15:11:31.117491 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:11:31.117496 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:11:31.117502 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:11:31.117506 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0
2025-08-29 15:11:31.117564 | orchestrator |
2025-08-29 15:11:31.117570 | orchestrator |
2025-08-29 15:11:31.117574 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:11:31.117577 | orchestrator | Friday 29 August 2025 15:11:27 +0000 (0:00:12.552) 0:02:29.129 *********
2025-08-29 15:11:31.117581 | orchestrator | ===============================================================================
2025-08-29 15:11:31.117585 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 16.31s
2025-08-29 15:11:31.117589 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.98s
2025-08-29 15:11:31.117597 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.46s
2025-08-29 15:11:31.117600 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.07s
2025-08-29 15:11:31.117604 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.55s
2025-08-29 15:11:31.117608 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.29s
2025-08-29 15:11:31.117615 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.80s
2025-08-29 15:11:31.117619 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.78s
2025-08-29 15:11:31.117623 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.20s
2025-08-29 15:11:31.117626 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.94s
2025-08-29 15:11:31.117661 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.58s
2025-08-29 15:11:31.117665 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.35s
2025-08-29 15:11:31.117669 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.00s
2025-08-29 15:11:31.117673 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.61s
2025-08-29 15:11:31.117677 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.48s
2025-08-29 15:11:31.117681 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 3.28s
2025-08-29 15:11:31.117684 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.63s
2025-08-29 15:11:31.117705 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.42s
2025-08-29 15:11:31.117710 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.19s
2025-08-29 15:11:31.117713 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.12s
2025-08-29 15:11:31.117717 | orchestrator | 2025-08-29 15:11:31 | INFO  | Wait 1 second(s) until the next check
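
The polling lines that follow come from the deployment driver waiting on the OSISM manager's background tasks: each UUID is one queued playbook run, and the client keeps re-reading each task's state until it leaves STARTED. A rough sketch of such a wait loop, assuming a hypothetical get_task_state() callable in place of whatever API the client actually uses (the state names are the ones visible in the log; FAILURE is assumed):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # SUCCESS appears below; FAILURE is assumed

def wait_for_tasks(task_ids, get_task_state, delay=1.0):
    """Poll every task until all of them reach a terminal state (sketch only)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(delay)} second(s) until the next check")
            time.sleep(delay)
```
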
2025-08-29 15:11:34.181127 | orchestrator | 2025-08-29 15:11:34 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:11:34.182459 | orchestrator | 2025-08-29 15:11:34 | INFO  | Task d3b67bea-6cb1-4a03-87c1-ce352b84e388 is in state STARTED
2025-08-29 15:11:34.183864 | orchestrator | 2025-08-29 15:11:34 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED
2025-08-29 15:11:34.185333 | orchestrator | 2025-08-29 15:11:34 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:11:34.185366 | orchestrator | 2025-08-29 15:11:34 | INFO  | Wait 1 second(s) until the next check
[... the same task-state checks repeat roughly every 3 seconds from 15:11:37 to 15:12:38; tasks de679f0c-5047-4dc0-ab29-59e133e20039, d3b67bea-6cb1-4a03-87c1-ce352b84e388, 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 and 5d01e064-4f22-4946-9fea-d3e4c09163ef all remain in state STARTED ...]
2025-08-29 15:12:41.639523 | orchestrator | 2025-08-29 15:12:41 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:12:41.639710 | orchestrator | 2025-08-29 15:12:41 | INFO  | Task d3b67bea-6cb1-4a03-87c1-ce352b84e388 is in state SUCCESS
2025-08-29 15:12:41.640354 | orchestrator | 2025-08-29 15:12:41 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state STARTED
2025-08-29 15:12:41.641093 | orchestrator | 2025-08-29 15:12:41 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED
2025-08-29 15:12:41.642166 | orchestrator | 2025-08-29 15:12:41 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:12:41.642198 | orchestrator | 2025-08-29 15:12:41 | INFO  | Wait 1 second(s) until the next check
[... the same checks repeat roughly every 3 seconds from 15:12:44 to 15:13:33; tasks de679f0c-5047-4dc0-ab29-59e133e20039, 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6, 7cbd92a3-6203-43ea-8445-864800742001 and 5d01e064-4f22-4946-9fea-d3e4c09163ef all remain in state STARTED ...]
2025-08-29 15:13:36.717416 | orchestrator | 2025-08-29 15:13:36 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:13:36.721905 | orchestrator | 2025-08-29 15:13:36 | INFO  | Task 89fb0246-888f-4b4a-8bd5-4eddd4fd7dd6 is in state SUCCESS
2025-08-29 15:13:36.723141 | orchestrator |
2025-08-29 15:13:36.723193 | orchestrator |
2025-08-29 15:13:36.723201 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-08-29 15:13:36.723209 | orchestrator |
2025-08-29 15:13:36.723216 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-08-29 15:13:36.723222 | orchestrator | Friday 29 August 2025 15:11:40 +0000 (0:00:00.209) 0:00:00.209 *********
2025-08-29 15:13:36.723229 | orchestrator | changed: [localhost]
2025-08-29 15:13:36.723236 | orchestrator |
2025-08-29 15:13:36.723242 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-08-29 15:13:36.723248 | orchestrator | Friday 29 August 2025 15:11:43 +0000 (0:00:02.520) 0:00:02.730 *********
2025-08-29 15:13:36.723254 | orchestrator | changed: [localhost]
2025-08-29 15:13:36.723261 | orchestrator |
2025-08-29 15:13:36.723267 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-08-29 15:13:36.723273 | orchestrator | Friday 29 August 2025 15:12:25 +0000 (0:00:42.474) 0:00:45.204 *********
2025-08-29 15:13:36.723279 | orchestrator | changed: [localhost]
2025-08-29 15:13:36.723285 | orchestrator |
2025-08-29 15:13:36.723291 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:13:36.723297 | orchestrator |
2025-08-29 15:13:36.723304 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:13:36.723310 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:09.008) 0:00:54.213 *********
2025-08-29 15:13:36.723336 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:36.723343 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:36.723349 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:36.723355 | orchestrator |
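
The next task is the usual kolla-ansible group_by pattern: every host is added to a group named after the value of its enable_<service> flag, and the later "Apply role ironic" play targets enable_ironic_True, which stays empty here because Ironic is disabled in this scenario (hence the harmless host-pattern warning below). A conceptual Python sketch of that bucketing, with illustrative host variables rather than the real inventory:

```python
from collections import defaultdict

# Illustrative host variables; in the real deployment these come from the
# kolla-ansible inventory and group_vars, not from this dict.
hostvars = {
    "testbed-node-0": {"enable_ironic": False, "enable_designate": True},
    "testbed-node-1": {"enable_ironic": False, "enable_designate": True},
    "testbed-node-2": {"enable_ironic": False, "enable_designate": True},
}

def group_by_flag(hostvars: dict, flag: str) -> dict:
    """Conceptual mirror of Ansible's group_by on '<flag>_<value>' (sketch only)."""
    groups = defaultdict(list)
    for host, variables in hostvars.items():
        groups[f"{flag}_{variables[flag]}"].append(host)
    return dict(groups)

groups = group_by_flag(hostvars, "enable_ironic")
# -> {'enable_ironic_False': ['testbed-node-0', 'testbed-node-1', 'testbed-node-2']}
# "Apply role ironic" targets enable_ironic_True, which has no members here,
# so Ansible reports "skipping: no hosts matched".
```
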
0:00:54.744 ********* 2025-08-29 15:13:36.723374 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-08-29 15:13:36.723380 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-08-29 15:13:36.723387 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-08-29 15:13:36.723393 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-08-29 15:13:36.723399 | orchestrator | 2025-08-29 15:13:36.723406 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-08-29 15:13:36.723412 | orchestrator | skipping: no hosts matched 2025-08-29 15:13:36.723419 | orchestrator | 2025-08-29 15:13:36.723425 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:13:36.723432 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:13:36.723440 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:13:36.723449 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:13:36.723456 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:13:36.723462 | orchestrator | 2025-08-29 15:13:36.723468 | orchestrator | 2025-08-29 15:13:36.723474 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:13:36.723481 | orchestrator | Friday 29 August 2025 15:12:37 +0000 (0:00:01.879) 0:00:56.623 ********* 2025-08-29 15:13:36.723487 | orchestrator | =============================================================================== 2025-08-29 15:13:36.723493 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 42.47s 2025-08-29 15:13:36.723499 | orchestrator | Download ironic-agent kernel -------------------------------------------- 9.01s 2025-08-29 15:13:36.723505 | orchestrator | Ensure the destination directory exists --------------------------------- 2.52s 2025-08-29 15:13:36.723511 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.88s 2025-08-29 15:13:36.723517 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2025-08-29 15:13:36.723523 | orchestrator | 2025-08-29 15:13:36.723530 | orchestrator | 2025-08-29 15:13:36.723536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:13:36.723542 | orchestrator | 2025-08-29 15:13:36.723548 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:13:36.723554 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:00.296) 0:00:00.296 ********* 2025-08-29 15:13:36.723562 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:13:36.723572 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:13:36.723674 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:13:36.723694 | orchestrator | 2025-08-29 15:13:36.723705 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:13:36.723716 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:00.361) 0:00:00.658 ********* 2025-08-29 15:13:36.723727 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 15:13:36.723739 | orchestrator | ok: 
[testbed-node-1] => (item=enable_designate_True) 2025-08-29 15:13:36.723750 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-08-29 15:13:36.723761 | orchestrator | 2025-08-29 15:13:36.723772 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 15:13:36.723794 | orchestrator | 2025-08-29 15:13:36.723805 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:36.723818 | orchestrator | Friday 29 August 2025 15:09:28 +0000 (0:00:01.370) 0:00:02.028 ********* 2025-08-29 15:13:36.723831 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:36.723843 | orchestrator | 2025-08-29 15:13:36.723849 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 15:13:36.723864 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:01.831) 0:00:03.859 ********* 2025-08-29 15:13:36.723882 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 15:13:36.723888 | orchestrator | 2025-08-29 15:13:36.723895 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 15:13:36.723901 | orchestrator | Friday 29 August 2025 15:09:34 +0000 (0:00:04.005) 0:00:07.865 ********* 2025-08-29 15:13:36.723907 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 15:13:36.723913 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 15:13:36.723919 | orchestrator | 2025-08-29 15:13:36.723925 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 15:13:36.723932 | orchestrator | Friday 29 August 2025 15:09:42 +0000 (0:00:07.531) 0:00:15.397 ********* 2025-08-29 15:13:36.723938 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:13:36.723944 | orchestrator | 2025-08-29 15:13:36.723950 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 15:13:36.723956 | orchestrator | Friday 29 August 2025 15:09:45 +0000 (0:00:03.370) 0:00:18.767 ********* 2025-08-29 15:13:36.723962 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:13:36.723968 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 15:13:36.724006 | orchestrator | 2025-08-29 15:13:36.724013 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 15:13:36.724073 | orchestrator | Friday 29 August 2025 15:09:49 +0000 (0:00:04.361) 0:00:23.129 ********* 2025-08-29 15:13:36.724079 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:13:36.724085 | orchestrator | 2025-08-29 15:13:36.724091 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 15:13:36.724098 | orchestrator | Friday 29 August 2025 15:09:53 +0000 (0:00:03.421) 0:00:26.550 ********* 2025-08-29 15:13:36.724104 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 15:13:36.724110 | orchestrator | 2025-08-29 15:13:36.724116 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 15:13:36.724122 | orchestrator | Friday 29 August 2025 
15:09:57 +0000 (0:00:04.461) 0:00:31.011 ********* 2025-08-29 15:13:36.724131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.724144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.724184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.724192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724201 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724324 | orchestrator | 2025-08-29 15:13:36.724345 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-08-29 15:13:36.724352 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:00:05.938) 0:00:36.950 ********* 2025-08-29 15:13:36.724358 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:36.724371 | orchestrator | 2025-08-29 15:13:36.724377 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 15:13:36.724384 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:00:00.170) 0:00:37.121 ********* 2025-08-29 15:13:36.724399 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:36.724406 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:36.724412 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:36.724418 | orchestrator | 2025-08-29 15:13:36.724424 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:36.724447 | orchestrator | Friday 29 August 2025 15:10:04 +0000 (0:00:00.329) 0:00:37.451 ********* 2025-08-29 15:13:36.724453 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:36.724460 | orchestrator | 2025-08-29 15:13:36.724466 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 15:13:36.724472 | orchestrator | Friday 29 August 2025 15:10:05 +0000 (0:00:01.760) 
0:00:39.211 ********* 2025-08-29 15:13:36.724479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.724501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.724518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.724525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724532 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
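Each designate container definition logged above and below carries a healthcheck block (interval 30s, 3 retries, 5s start period, 30s timeout) whose test is either healthcheck_curl against the API on port 9001 or healthcheck_port / healthcheck_listen for a TCP port. As a rough sketch of what such a check amounts to, a minimal Python equivalent could look like the following; the real kolla helper scripts are not shown in this log, so modelling healthcheck_curl as "any HTTP answer" and the port checks as a plain TCP connect is an assumption.

#!/usr/bin/env python3
# Rough approximation of the designate container healthchecks seen in this log.
# Assumption: 'healthcheck_curl <url>' is treated as healthy on any HTTP answer,
# and 'healthcheck_port <proc> <port>' / 'healthcheck_listen <proc> <port>' as a
# plain TCP connect; the actual kolla helper scripts may check more than this.
import socket
import sys
import urllib.error
import urllib.request

def check_http(url: str, timeout: float = 30.0) -> bool:
    # Mirrors 'healthcheck_curl http://192.168.16.10:9001': healthy if the
    # endpoint answers at all within the timeout.
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True  # the service answered, just with an error status
    except OSError:
        return False

def check_tcp(host: str, port: int, timeout: float = 30.0) -> bool:
    # Loosely mirrors 'healthcheck_port designate-central 5672' and
    # 'healthcheck_listen named 53': healthy if a TCP connection can be opened.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Address and ports taken from the healthcheck definitions in this log.
    healthy = check_http("http://192.168.16.10:9001") and check_tcp("127.0.0.1", 53)
    sys.exit(0 if healthy else 1)  # a non-zero exit marks the container unhealthy

The container engine runs the test every interval seconds and only flips the container to unhealthy after the configured number of consecutive failures, which is why the same 30/3/5/30 values recur verbatim in every item in this play.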
2025-08-29 15:13:36.724669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.724695 | orchestrator | 2025-08-29 15:13:36.724702 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 15:13:36.724708 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:11.235) 0:00:50.447 ********* 2025-08-29 15:13:36.724715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.724721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.725148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.725236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.725250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725298 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 15:13:36.725305 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:36.725311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.725318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.725324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725361 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:36.725367 | orchestrator | 2025-08-29 15:13:36.725374 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 15:13:36.725380 | orchestrator | Friday 29 August 2025 15:10:19 +0000 (0:00:02.809) 0:00:53.256 ********* 2025-08-29 15:13:36.725387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.725393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.725400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725441 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:36.725447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.725454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.725461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725498 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:36.725504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.725511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.725517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.725555 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:36.725561 | orchestrator | 2025-08-29 15:13:36.725568 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 15:13:36.725574 | orchestrator | Friday 29 August 2025 15:10:24 +0000 (0:00:04.963) 0:00:58.219 ********* 2025-08-29 15:13:36.725580 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.725587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.725608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.725615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725741 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725763 | orchestrator | 2025-08-29 15:13:36.725769 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 15:13:36.725776 | orchestrator | Friday 29 August 2025 15:10:34 +0000 (0:00:09.196) 0:01:07.416 ********* 2025-08-29 15:13:36.725784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.725791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.725813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.725821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725843 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.725948 | orchestrator | 2025-08-29 15:13:36.725955 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 15:13:36.725961 | orchestrator | Friday 29 August 2025 15:11:09 +0000 (0:00:35.078) 0:01:42.495 ********* 2025-08-29 15:13:36.725967 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:13:36.725974 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:13:36.725980 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:13:36.725986 | orchestrator | 2025-08-29 15:13:36.725993 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 15:13:36.725999 | orchestrator | Friday 29 August 2025 15:11:19 +0000 (0:00:10.151) 0:01:52.646 ********* 2025-08-29 15:13:36.726005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:13:36.726011 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:13:36.726062 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:13:36.726074 | orchestrator | 2025-08-29 15:13:36.726080 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-08-29 15:13:36.726087 | orchestrator | Friday 29 August 2025 15:11:25 +0000 (0:00:06.217) 0:01:58.864 ********* 2025-08-29 15:13:36.726093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726303 | orchestrator | 2025-08-29 15:13:36.726309 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-08-29 15:13:36.726315 | orchestrator | Friday 29 August 2025 15:11:30 +0000 (0:00:04.626) 0:02:03.490 ********* 2025-08-29 15:13:36.726322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-08-29 15:13:36.726350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9',
2025-08-29 15:13:36 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED
2025-08-29 15:13:36.726402 | orchestrator | 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726518 | orchestrator | 2025-08-29 15:13:36.726524 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:36.726530 | orchestrator | Friday 29 August 2025 15:11:34 +0000 (0:00:04.374) 0:02:07.865 ********* 2025-08-29 15:13:36.726537 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:36.726543 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:36.726549 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:36.726555 | orchestrator | 2025-08-29 15:13:36.726562 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-08-29 15:13:36.726568 | 
orchestrator | Friday 29 August 2025 15:11:37 +0000 (0:00:02.557) 0:02:10.423 ********* 2025-08-29 15:13:36.726574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.726594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726625 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:36.726632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.726652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726686 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:36.726692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:36.726698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:36.726712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:36.726746 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:36.726752 | orchestrator | 2025-08-29 15:13:36.726758 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 15:13:36.726764 | orchestrator | Friday 29 August 2025 15:11:40 +0000 (0:00:03.071) 0:02:13.495 ********* 2025-08-29 15:13:36.726771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.726784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.726797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:36.726804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726867 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:36.726946 | orchestrator | 2025-08-29 15:13:36.726953 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:36.726959 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:06.516) 0:02:20.012 ********* 2025-08-29 15:13:36.726965 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:36.726971 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:36.726978 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:36.726983 | orchestrator | 2025-08-29 15:13:36.726990 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-08-29 15:13:36.726996 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:00.635) 0:02:20.648 ********* 2025-08-29 15:13:36.727003 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 15:13:36.727009 | orchestrator | 2025-08-29 15:13:36.727015 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-08-29 15:13:36.727021 | orchestrator | Friday 29 August 2025 15:11:49 +0000 (0:00:02.434) 0:02:23.082 ********* 2025-08-29 15:13:36.727028 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:13:36.727034 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 15:13:36.727040 | orchestrator | 2025-08-29 15:13:36.727046 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 15:13:36.727052 | orchestrator | Friday 29 August 2025 15:11:55 +0000 (0:00:05.475) 0:02:28.558 ********* 2025-08-29 15:13:36.727059 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727069 | orchestrator | 2025-08-29 15:13:36.727076 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:13:36.727082 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:16.802) 0:02:45.361 ********* 2025-08-29 15:13:36.727088 | orchestrator | 2025-08-29 15:13:36.727094 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:13:36.727101 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:00.132) 0:02:45.493 ********* 2025-08-29 15:13:36.727107 | orchestrator | 2025-08-29 15:13:36.727113 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:13:36.727120 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:00.111) 0:02:45.605 ********* 2025-08-29 15:13:36.727126 | orchestrator | 2025-08-29 15:13:36.727132 | orchestrator | RUNNING 
HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-08-29 15:13:36.727138 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:00.143) 0:02:45.749 ********* 2025-08-29 15:13:36.727145 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:36.727151 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727173 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:36.727180 | orchestrator | 2025-08-29 15:13:36.727186 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 15:13:36.727193 | orchestrator | Friday 29 August 2025 15:12:27 +0000 (0:00:15.292) 0:03:01.041 ********* 2025-08-29 15:13:36.727199 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:36.727205 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727225 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:36.727237 | orchestrator | 2025-08-29 15:13:36.727246 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 15:13:36.727256 | orchestrator | Friday 29 August 2025 15:12:42 +0000 (0:00:14.402) 0:03:15.443 ********* 2025-08-29 15:13:36.727266 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:36.727275 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:36.727285 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727294 | orchestrator | 2025-08-29 15:13:36.727304 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 15:13:36.727313 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:12.977) 0:03:28.421 ********* 2025-08-29 15:13:36.727323 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727333 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:36.727343 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:36.727352 | orchestrator | 2025-08-29 15:13:36.727361 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-08-29 15:13:36.727370 | orchestrator | Friday 29 August 2025 15:13:05 +0000 (0:00:10.203) 0:03:38.625 ********* 2025-08-29 15:13:36.727380 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727390 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:36.727400 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:36.727410 | orchestrator | 2025-08-29 15:13:36.727419 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-08-29 15:13:36.727425 | orchestrator | Friday 29 August 2025 15:13:18 +0000 (0:00:13.322) 0:03:51.947 ********* 2025-08-29 15:13:36.727431 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727437 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:36.727443 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:36.727450 | orchestrator | 2025-08-29 15:13:36.727456 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-08-29 15:13:36.727462 | orchestrator | Friday 29 August 2025 15:13:27 +0000 (0:00:09.023) 0:04:00.971 ********* 2025-08-29 15:13:36.727468 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:36.727474 | orchestrator | 2025-08-29 15:13:36.727480 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:13:36.727487 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 
15:13:36.727502 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:13:36.727509 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:13:36.727515 | orchestrator | 2025-08-29 15:13:36.727521 | orchestrator | 2025-08-29 15:13:36.727527 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:13:36.727534 | orchestrator | Friday 29 August 2025 15:13:35 +0000 (0:00:07.814) 0:04:08.785 ********* 2025-08-29 15:13:36.727540 | orchestrator | =============================================================================== 2025-08-29 15:13:36.727546 | orchestrator | designate : Copying over designate.conf -------------------------------- 35.08s 2025-08-29 15:13:36.727552 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.80s 2025-08-29 15:13:36.727558 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.29s 2025-08-29 15:13:36.727564 | orchestrator | designate : Restart designate-api container ---------------------------- 14.40s 2025-08-29 15:13:36.727574 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.32s 2025-08-29 15:13:36.727583 | orchestrator | designate : Restart designate-central container ------------------------ 12.98s 2025-08-29 15:13:36.727592 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ----- 11.24s 2025-08-29 15:13:36.727602 | orchestrator | designate : Restart designate-producer container ----------------------- 10.20s 2025-08-29 15:13:36.727611 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.15s 2025-08-29 15:13:36.727621 | orchestrator | designate : Copying over config.json files for services ----------------- 9.20s 2025-08-29 15:13:36.727630 | orchestrator | designate : Restart designate-worker container -------------------------- 9.02s 2025-08-29 15:13:36.727640 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.81s 2025-08-29 15:13:36.727649 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.53s 2025-08-29 15:13:36.727659 | orchestrator | designate : Check designate containers ---------------------------------- 6.52s 2025-08-29 15:13:36.727668 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.22s 2025-08-29 15:13:36.727677 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.94s 2025-08-29 15:13:36.727687 | orchestrator | designate : Creating Designate databases user and setting permissions --- 5.48s 2025-08-29 15:13:36.727696 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 4.96s 2025-08-29 15:13:36.727706 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.63s 2025-08-29 15:13:36.727715 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.46s 2025-08-29 15:13:36.727725 | orchestrator | 2025-08-29 15:13:36 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:36.727735 | orchestrator | 2025-08-29 15:13:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:39.770088 | orchestrator | 2025-08-29 15:13:39 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 
is in state STARTED 2025-08-29 15:13:39.771721 | orchestrator | 2025-08-29 15:13:39 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:39.773292 | orchestrator | 2025-08-29 15:13:39 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:13:39.775559 | orchestrator | 2025-08-29 15:13:39 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:39.775612 | orchestrator | 2025-08-29 15:13:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:42.836391 | orchestrator | 2025-08-29 15:13:42 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:13:42.837179 | orchestrator | 2025-08-29 15:13:42 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:42.838843 | orchestrator | 2025-08-29 15:13:42 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:13:42.838881 | orchestrator | 2025-08-29 15:13:42 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:42.838890 | orchestrator | 2025-08-29 15:13:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:45.921217 | orchestrator | 2025-08-29 15:13:45 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:13:45.923193 | orchestrator | 2025-08-29 15:13:45 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:45.924203 | orchestrator | 2025-08-29 15:13:45 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:13:45.925450 | orchestrator | 2025-08-29 15:13:45 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:45.925474 | orchestrator | 2025-08-29 15:13:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:48.956969 | orchestrator | 2025-08-29 15:13:48 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:13:48.957409 | orchestrator | 2025-08-29 15:13:48 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:48.958216 | orchestrator | 2025-08-29 15:13:48 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:13:48.959115 | orchestrator | 2025-08-29 15:13:48 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:48.959132 | orchestrator | 2025-08-29 15:13:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:52.011507 | orchestrator | 2025-08-29 15:13:52 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:13:52.012262 | orchestrator | 2025-08-29 15:13:52 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:52.013200 | orchestrator | 2025-08-29 15:13:52 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:13:52.014122 | orchestrator | 2025-08-29 15:13:52 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:52.014167 | orchestrator | 2025-08-29 15:13:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:55.058893 | orchestrator | 2025-08-29 15:13:55 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:13:55.060097 | orchestrator | 2025-08-29 15:13:55 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:55.061523 | orchestrator | 2025-08-29 15:13:55 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is 
in state STARTED 2025-08-29 15:13:55.062645 | orchestrator | 2025-08-29 15:13:55 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:55.062686 | orchestrator | 2025-08-29 15:13:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:58.110734 | orchestrator | 2025-08-29 15:13:58 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:13:58.111917 | orchestrator | 2025-08-29 15:13:58 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:13:58.113593 | orchestrator | 2025-08-29 15:13:58 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:13:58.115090 | orchestrator | 2025-08-29 15:13:58 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:13:58.115161 | orchestrator | 2025-08-29 15:13:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:01.155730 | orchestrator | 2025-08-29 15:14:01 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:01.158986 | orchestrator | 2025-08-29 15:14:01 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:01.160162 | orchestrator | 2025-08-29 15:14:01 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:14:01.161336 | orchestrator | 2025-08-29 15:14:01 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:01.161361 | orchestrator | 2025-08-29 15:14:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:04.190414 | orchestrator | 2025-08-29 15:14:04 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:04.192033 | orchestrator | 2025-08-29 15:14:04 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:04.193149 | orchestrator | 2025-08-29 15:14:04 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state STARTED 2025-08-29 15:14:04.194342 | orchestrator | 2025-08-29 15:14:04 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:04.194379 | orchestrator | 2025-08-29 15:14:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:07.270445 | orchestrator | 2025-08-29 15:14:07 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:07.270510 | orchestrator | 2025-08-29 15:14:07 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:07.270518 | orchestrator | 2025-08-29 15:14:07 | INFO  | Task 7cbd92a3-6203-43ea-8445-864800742001 is in state SUCCESS 2025-08-29 15:14:07.270524 | orchestrator | 2025-08-29 15:14:07 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:07.270530 | orchestrator | 2025-08-29 15:14:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:07.271151 | orchestrator | 2025-08-29 15:14:07.271187 | orchestrator | 2025-08-29 15:14:07.271196 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:14:07.271203 | orchestrator | 2025-08-29 15:14:07.271210 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:14:07.271217 | orchestrator | Friday 29 August 2025 15:12:52 +0000 (0:00:00.745) 0:00:00.745 ********* 2025-08-29 15:14:07.271224 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:07.271232 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:07.271239 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:07.271247 | orchestrator | 2025-08-29 15:14:07.271253 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:14:07.271260 | orchestrator | Friday 29 August 2025 15:12:53 +0000 (0:00:01.097) 0:00:01.842 ********* 2025-08-29 15:14:07.271268 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 15:14:07.271276 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 15:14:07.271284 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 15:14:07.271291 | orchestrator | 2025-08-29 15:14:07.271299 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 15:14:07.271306 | orchestrator | 2025-08-29 15:14:07.271313 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:14:07.271320 | orchestrator | Friday 29 August 2025 15:12:54 +0000 (0:00:00.860) 0:00:02.703 ********* 2025-08-29 15:14:07.271328 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:07.271336 | orchestrator | 2025-08-29 15:14:07.271343 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-08-29 15:14:07.271349 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:01.440) 0:00:04.143 ********* 2025-08-29 15:14:07.271367 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 15:14:07.271374 | orchestrator | 2025-08-29 15:14:07.271380 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 15:14:07.271385 | orchestrator | Friday 29 August 2025 15:12:59 +0000 (0:00:04.034) 0:00:08.177 ********* 2025-08-29 15:14:07.271392 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 15:14:07.271399 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 15:14:07.271407 | orchestrator | 2025-08-29 15:14:07.271414 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-08-29 15:14:07.271421 | orchestrator | Friday 29 August 2025 15:13:06 +0000 (0:00:07.082) 0:00:15.260 ********* 2025-08-29 15:14:07.271429 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:14:07.271436 | orchestrator | 2025-08-29 15:14:07.271444 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 15:14:07.271452 | orchestrator | Friday 29 August 2025 15:13:10 +0000 (0:00:03.621) 0:00:18.881 ********* 2025-08-29 15:14:07.271460 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:14:07.271468 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-08-29 15:14:07.271476 | orchestrator | 2025-08-29 15:14:07.271483 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 15:14:07.271490 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:03.979) 0:00:22.861 ********* 2025-08-29 15:14:07.271498 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:14:07.271506 | orchestrator | 2025-08-29 15:14:07.271514 | orchestrator | TASK [service-ks-register : placement | Granting user roles] 
******************* 2025-08-29 15:14:07.271521 | orchestrator | Friday 29 August 2025 15:13:17 +0000 (0:00:03.398) 0:00:26.259 ********* 2025-08-29 15:14:07.271537 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 15:14:07.271545 | orchestrator | 2025-08-29 15:14:07.271552 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:14:07.271559 | orchestrator | Friday 29 August 2025 15:13:22 +0000 (0:00:04.897) 0:00:31.156 ********* 2025-08-29 15:14:07.271566 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:07.271573 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:07.271580 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:07.271586 | orchestrator | 2025-08-29 15:14:07.271594 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 15:14:07.271602 | orchestrator | Friday 29 August 2025 15:13:23 +0000 (0:00:00.459) 0:00:31.615 ********* 2025-08-29 15:14:07.271611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.271632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.271645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.271653 | orchestrator | 2025-08-29 15:14:07.271660 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 15:14:07.271667 | orchestrator | Friday 29 August 2025 15:13:24 +0000 (0:00:01.058) 0:00:32.674 ********* 2025-08-29 15:14:07.271674 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:07.271681 | orchestrator | 2025-08-29 15:14:07.271688 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 15:14:07.271695 | orchestrator | Friday 29 August 2025 15:13:24 +0000 (0:00:00.126) 0:00:32.801 ********* 2025-08-29 15:14:07.271702 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:07.271709 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:07.271717 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:07.271725 | orchestrator | 2025-08-29 15:14:07.271732 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:14:07.271740 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.523) 0:00:33.325 ********* 2025-08-29 15:14:07.271749 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:07.271758 | orchestrator | 2025-08-29 15:14:07.271767 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-08-29 15:14:07.271784 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.548) 0:00:33.873 ********* 2025-08-29 15:14:07.271794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.271809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.271824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.271833 | orchestrator | 2025-08-29 15:14:07.271842 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 15:14:07.271851 | orchestrator | Friday 29 August 2025 15:13:27 +0000 (0:00:01.536) 0:00:35.410 ********* 2025-08-29 15:14:07.271861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.271871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:07.271883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.271890 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:14:07.271901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.271912 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:07.271921 | orchestrator | 2025-08-29 15:14:07.271929 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 15:14:07.271938 | orchestrator | Friday 29 August 2025 15:13:28 +0000 (0:00:00.990) 0:00:36.400 ********* 2025-08-29 15:14:07.271947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.271955 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:07.271963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.271972 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:07.271982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.271993 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:07.272000 | orchestrator | 2025-08-29 15:14:07.272007 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 15:14:07.272014 | orchestrator | Friday 29 August 2025 15:13:28 +0000 (0:00:00.850) 0:00:37.250 ********* 2025-08-29 15:14:07.272027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272049 | orchestrator | 2025-08-29 15:14:07.272055 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 15:14:07.272062 | orchestrator | Friday 29 August 2025 15:13:30 +0000 (0:00:01.559) 0:00:38.810 ********* 2025-08-29 15:14:07.272072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
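For readability, the placement-api item repeated in the loop output above can be viewed as a single service-definition mapping. The following is a minimal illustrative sketch only, with the values copied from the log entries above; the helper function is hypothetical and is not part of kolla-ansible or OSISM, it merely shows how the healthcheck fields would map onto docker-style flags.

# Illustrative sketch: the placement_api service definition as it appears
# in the loop items above, written out as a plain Python dict.
# Values are taken from the log; the helper below is hypothetical.
placement_api = {
    "container_name": "placement_api",
    "group": "placement-api",
    "image": "registry.osism.tech/kolla/release/placement-api:12.0.1.20250711",
    "enabled": True,
    "volumes": [
        "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
        "timeout": "30",
    },
    "haproxy": {
        "placement_api": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
        "placement_api_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
    },
}

def docker_healthcheck_args(hc: dict) -> list[str]:
    """Hypothetical helper (not from kolla-ansible): render the healthcheck
    mapping as docker-run style flags, only to show how the fields are used."""
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

if __name__ == "__main__":
    print(docker_healthcheck_args(placement_api["healthcheck"]))

Run directly, the sketch only prints the derived flags; it is intended to make the repeated item dicts in the task output above easier to read, not to reproduce the actual kolla-ansible templates.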
2025-08-29 15:14:07.272102 | orchestrator | 2025-08-29 15:14:07.272109 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 15:14:07.272116 | orchestrator | Friday 29 August 2025 15:13:33 +0000 (0:00:02.680) 0:00:41.490 ********* 2025-08-29 15:14:07.272123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:14:07.272147 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:14:07.272155 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:14:07.272163 | orchestrator | 2025-08-29 15:14:07.272169 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 15:14:07.272177 | orchestrator | Friday 29 August 2025 15:13:34 +0000 (0:00:01.537) 0:00:43.028 ********* 2025-08-29 15:14:07.272184 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:07.272192 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:07.272200 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:07.272207 | orchestrator | 2025-08-29 15:14:07.272215 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 15:14:07.272222 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:01.529) 0:00:44.558 ********* 2025-08-29 15:14:07.272233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.272245 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:07.272253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.272260 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:14:07.272272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:07.272280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:07.272288 | orchestrator | 2025-08-29 15:14:07.272296 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 15:14:07.272302 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:00.702) 0:00:45.260 ********* 2025-08-29 15:14:07.272309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:07.272346 | orchestrator | 2025-08-29 15:14:07.272354 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 15:14:07.272361 | orchestrator | Friday 29 August 2025 15:13:39 +0000 (0:00:02.320) 0:00:47.580 ********* 2025-08-29 15:14:07.272368 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:07.272376 | orchestrator | 2025-08-29 15:14:07.272383 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 15:14:07.272391 | orchestrator | Friday 29 August 2025 15:13:41 +0000 (0:00:02.464) 0:00:50.045 ********* 2025-08-29 15:14:07.272397 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:07.272405 | orchestrator | 2025-08-29 15:14:07.272413 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-08-29 15:14:07.272420 | orchestrator | Friday 29 August 2025 15:13:44 +0000 (0:00:02.665) 0:00:52.711 ********* 2025-08-29 15:14:07.272431 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:07.272439 | orchestrator | 2025-08-29 15:14:07.272446 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:14:07.272453 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:15.075) 0:01:07.787 ********* 2025-08-29 15:14:07.272460 | orchestrator | 2025-08-29 15:14:07.272467 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:14:07.272475 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:00.127) 0:01:07.914 ********* 2025-08-29 15:14:07.272482 | orchestrator | 2025-08-29 15:14:07.272489 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:14:07.272501 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:00.101) 0:01:08.016 ********* 2025-08-29 15:14:07.272508 | orchestrator | 2025-08-29 15:14:07.272515 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-08-29 15:14:07.272523 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:00.093) 0:01:08.110 ********* 2025-08-29 15:14:07.272531 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:07.272539 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:07.272547 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:07.272555 | orchestrator | 2025-08-29 15:14:07.272562 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:14:07.272570 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:14:07.272579 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:14:07.272593 | orchestrator | testbed-node-2 : ok=12  changed=8  
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:14:07.272600 | orchestrator |
2025-08-29 15:14:07.272608 | orchestrator |
2025-08-29 15:14:07.272616 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:14:07.272624 | orchestrator | Friday 29 August 2025 15:14:05 +0000 (0:00:06.081) 0:01:14.191 *********
2025-08-29 15:14:07.272631 | orchestrator | ===============================================================================
2025-08-29 15:14:07.272640 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.08s
2025-08-29 15:14:07.272648 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.08s
2025-08-29 15:14:07.272656 | orchestrator | placement : Restart placement-api container ----------------------------- 6.08s
2025-08-29 15:14:07.272663 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.90s
2025-08-29 15:14:07.272669 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.03s
2025-08-29 15:14:07.272677 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.98s
2025-08-29 15:14:07.272684 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.62s
2025-08-29 15:14:07.272692 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.40s
2025-08-29 15:14:07.272699 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.68s
2025-08-29 15:14:07.272706 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.67s
2025-08-29 15:14:07.272713 | orchestrator | placement : Creating placement databases -------------------------------- 2.47s
2025-08-29 15:14:07.272720 | orchestrator | placement : Check placement containers ---------------------------------- 2.32s
2025-08-29 15:14:07.272730 | orchestrator | placement : Copying over config.json files for services ----------------- 1.56s
2025-08-29 15:14:07.272738 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.54s
2025-08-29 15:14:07.272746 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.54s
2025-08-29 15:14:07.272754 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.53s
2025-08-29 15:14:07.272761 | orchestrator | placement : include_tasks ----------------------------------------------- 1.44s
2025-08-29 15:14:07.272769 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s
2025-08-29 15:14:07.272777 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.06s
2025-08-29 15:14:07.272784 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.99s
2025-08-29 15:14:10.284523 | orchestrator | 2025-08-29 15:14:10 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:14:10.289678 | orchestrator | 2025-08-29 15:14:10 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED
2025-08-29 15:14:10.291298 | orchestrator | 2025-08-29 15:14:10 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED
2025-08-29 15:14:10.292420 | orchestrator | 2025-08-29 15:14:10 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
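
The repeated "Task … is in state STARTED" lines around this point show the deployment tooling waiting on the tasks it has enqueued: each Kolla play runs as a background task, and its state is re-checked about once per second until it reports SUCCESS (or FAILURE). A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) callable; this is an illustration of the pattern visible in the log, not the actual osism client code:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll every task that has not finished yet; drop a task from the pending
    # set once a terminal state is reported, mirroring the log output here.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ('SUCCESS', 'FAILURE'):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
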
15:14:10.292473 | orchestrator | 2025-08-29 15:14:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:13.347538 | orchestrator | 2025-08-29 15:14:13 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:13.352970 | orchestrator | 2025-08-29 15:14:13 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:13.360182 | orchestrator | 2025-08-29 15:14:13 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:13.363774 | orchestrator | 2025-08-29 15:14:13 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:13.364783 | orchestrator | 2025-08-29 15:14:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:16.441392 | orchestrator | 2025-08-29 15:14:16 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:16.447212 | orchestrator | 2025-08-29 15:14:16 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:16.450329 | orchestrator | 2025-08-29 15:14:16 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:16.456293 | orchestrator | 2025-08-29 15:14:16 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:16.456686 | orchestrator | 2025-08-29 15:14:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:19.496563 | orchestrator | 2025-08-29 15:14:19 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:19.497594 | orchestrator | 2025-08-29 15:14:19 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:19.498847 | orchestrator | 2025-08-29 15:14:19 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:19.501062 | orchestrator | 2025-08-29 15:14:19 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:19.501202 | orchestrator | 2025-08-29 15:14:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:22.539183 | orchestrator | 2025-08-29 15:14:22 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:22.540762 | orchestrator | 2025-08-29 15:14:22 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:22.542809 | orchestrator | 2025-08-29 15:14:22 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:22.544203 | orchestrator | 2025-08-29 15:14:22 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:22.544261 | orchestrator | 2025-08-29 15:14:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:25.590476 | orchestrator | 2025-08-29 15:14:25 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:25.592744 | orchestrator | 2025-08-29 15:14:25 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:25.594529 | orchestrator | 2025-08-29 15:14:25 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:25.596804 | orchestrator | 2025-08-29 15:14:25 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:25.596853 | orchestrator | 2025-08-29 15:14:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:28.635445 | orchestrator | 2025-08-29 15:14:28 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:28.635539 | 
orchestrator | 2025-08-29 15:14:28 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:28.637646 | orchestrator | 2025-08-29 15:14:28 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:28.639946 | orchestrator | 2025-08-29 15:14:28 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:28.640017 | orchestrator | 2025-08-29 15:14:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:31.687688 | orchestrator | 2025-08-29 15:14:31 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:31.690487 | orchestrator | 2025-08-29 15:14:31 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:31.693222 | orchestrator | 2025-08-29 15:14:31 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:31.695905 | orchestrator | 2025-08-29 15:14:31 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:31.696593 | orchestrator | 2025-08-29 15:14:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:34.736974 | orchestrator | 2025-08-29 15:14:34 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:34.737866 | orchestrator | 2025-08-29 15:14:34 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:34.738901 | orchestrator | 2025-08-29 15:14:34 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:34.742177 | orchestrator | 2025-08-29 15:14:34 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:34.742221 | orchestrator | 2025-08-29 15:14:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:37.781901 | orchestrator | 2025-08-29 15:14:37 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:37.783361 | orchestrator | 2025-08-29 15:14:37 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:37.785694 | orchestrator | 2025-08-29 15:14:37 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:37.786957 | orchestrator | 2025-08-29 15:14:37 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:37.786996 | orchestrator | 2025-08-29 15:14:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:40.860147 | orchestrator | 2025-08-29 15:14:40 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:40.860879 | orchestrator | 2025-08-29 15:14:40 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:40.861741 | orchestrator | 2025-08-29 15:14:40 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:40.863708 | orchestrator | 2025-08-29 15:14:40 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED 2025-08-29 15:14:40.863923 | orchestrator | 2025-08-29 15:14:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:43.912838 | orchestrator | 2025-08-29 15:14:43 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:43.919200 | orchestrator | 2025-08-29 15:14:43 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:43.920921 | orchestrator | 2025-08-29 15:14:43 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:43.922339 | 
orchestrator | 2025-08-29 15:14:43 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:14:43.922386 | orchestrator | 2025-08-29 15:14:43 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:46.962709 | orchestrator | 2025-08-29 15:14:46 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:14:46.964276 | orchestrator | 2025-08-29 15:14:46 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED
2025-08-29 15:14:46.965564 | orchestrator | 2025-08-29 15:14:46 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED
2025-08-29 15:14:46.968992 | orchestrator | 2025-08-29 15:14:46 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state STARTED
2025-08-29 15:14:46.969074 | orchestrator | 2025-08-29 15:14:46 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:50.022388 | orchestrator | 2025-08-29 15:14:50 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED
2025-08-29 15:14:50.024180 | orchestrator | 2025-08-29 15:14:50 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED
2025-08-29 15:14:50.029606 | orchestrator | 2025-08-29 15:14:50 | INFO  | Task 9ecba729-77e8-4c7e-86ac-65c38644001e is in state STARTED
2025-08-29 15:14:50.029693 | orchestrator | 2025-08-29 15:14:50 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED
2025-08-29 15:14:50.035995 | orchestrator | 2025-08-29 15:14:50 | INFO  | Task 5d01e064-4f22-4946-9fea-d3e4c09163ef is in state SUCCESS
2025-08-29 15:14:50.038279 | orchestrator |
2025-08-29 15:14:50.038532 | orchestrator |
2025-08-29 15:14:50.038554 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:14:50.038562 | orchestrator |
2025-08-29 15:14:50.038569 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:14:50.038746 | orchestrator | Friday 29 August 2025 15:08:32 +0000 (0:00:00.255) 0:00:00.255 *********
2025-08-29 15:14:50.038764 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:14:50.038772 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:14:50.038779 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:14:50.038787 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:14:50.038794 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:14:50.038801 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:14:50.038808 | orchestrator |
2025-08-29 15:14:50.038816 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:14:50.038824 | orchestrator | Friday 29 August 2025 15:08:33 +0000 (0:00:00.723) 0:00:00.979 *********
2025-08-29 15:14:50.038831 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-08-29 15:14:50.038839 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-08-29 15:14:50.038845 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-08-29 15:14:50.038852 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-08-29 15:14:50.038859 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-08-29 15:14:50.038866 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-08-29 15:14:50.038873 | orchestrator |
2025-08-29 15:14:50.038881 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-08-29 15:14:50.038888 | orchestrator |
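
The two "Group hosts based on …" plays above are kolla-ansible's dynamic inventory step: before the neutron role runs, hosts are added to groups derived from the requested Kolla action and from service flags such as enable_neutron, and the plays that follow simply target those groups. A rough illustration of the grouping idea in plain Python; the real tasks use Ansible's group_by module, and group_hosts() here is purely hypothetical:

def group_hosts(host_vars):
    # Build dynamic groups named '<flag>_<value>', e.g. 'enable_neutron_True',
    # from per-host service flags -- analogous to group_by(key=...) in the play.
    groups = {}
    for host, flags in host_vars.items():
        for flag, value in flags.items():
            if flag.startswith('enable_'):
                groups.setdefault(f'{flag}_{value}', []).append(host)
    return groups

hosts = {f'testbed-node-{i}': {'enable_neutron': True} for i in range(6)}
print(group_hosts(hosts))
# {'enable_neutron_True': ['testbed-node-0', ..., 'testbed-node-5']}
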
| TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:14:50.038902 | orchestrator | Friday 29 August 2025 15:08:34 +0000 (0:00:00.635) 0:00:01.615 ********* 2025-08-29 15:14:50.038910 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:14:50.038918 | orchestrator | 2025-08-29 15:14:50.038925 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-08-29 15:14:50.038933 | orchestrator | Friday 29 August 2025 15:08:35 +0000 (0:00:01.184) 0:00:02.800 ********* 2025-08-29 15:14:50.038940 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:50.038947 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:50.038956 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:50.038964 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:14:50.038971 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:14:50.038978 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:14:50.038985 | orchestrator | 2025-08-29 15:14:50.038993 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-08-29 15:14:50.039000 | orchestrator | Friday 29 August 2025 15:08:36 +0000 (0:00:01.193) 0:00:03.993 ********* 2025-08-29 15:14:50.039008 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:50.039016 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:50.039023 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:50.039031 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:14:50.039038 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:14:50.039046 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:14:50.039112 | orchestrator | 2025-08-29 15:14:50.039120 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-08-29 15:14:50.039126 | orchestrator | Friday 29 August 2025 15:08:37 +0000 (0:00:01.123) 0:00:05.117 ********* 2025-08-29 15:14:50.039133 | orchestrator | ok: [testbed-node-0] => { 2025-08-29 15:14:50.039140 | orchestrator |  "changed": false, 2025-08-29 15:14:50.039146 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:14:50.039151 | orchestrator | } 2025-08-29 15:14:50.039157 | orchestrator | ok: [testbed-node-1] => { 2025-08-29 15:14:50.039163 | orchestrator |  "changed": false, 2025-08-29 15:14:50.039169 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:14:50.039175 | orchestrator | } 2025-08-29 15:14:50.039181 | orchestrator | ok: [testbed-node-2] => { 2025-08-29 15:14:50.039187 | orchestrator |  "changed": false, 2025-08-29 15:14:50.039193 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:14:50.039200 | orchestrator | } 2025-08-29 15:14:50.039206 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 15:14:50.039212 | orchestrator |  "changed": false, 2025-08-29 15:14:50.039218 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:14:50.039224 | orchestrator | } 2025-08-29 15:14:50.039231 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 15:14:50.039237 | orchestrator |  "changed": false, 2025-08-29 15:14:50.039244 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:14:50.039250 | orchestrator | } 2025-08-29 15:14:50.039256 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 15:14:50.039263 | orchestrator |  "changed": false, 2025-08-29 15:14:50.039269 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:14:50.039276 | 
orchestrator | } 2025-08-29 15:14:50.039281 | orchestrator | 2025-08-29 15:14:50.039288 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-08-29 15:14:50.039294 | orchestrator | Friday 29 August 2025 15:08:38 +0000 (0:00:01.262) 0:00:06.379 ********* 2025-08-29 15:14:50.039300 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.039306 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.039312 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.039318 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.039324 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.039331 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.039337 | orchestrator | 2025-08-29 15:14:50.039344 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-08-29 15:14:50.039351 | orchestrator | Friday 29 August 2025 15:08:39 +0000 (0:00:00.866) 0:00:07.246 ********* 2025-08-29 15:14:50.039372 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-08-29 15:14:50.039380 | orchestrator | 2025-08-29 15:14:50.039387 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-08-29 15:14:50.039394 | orchestrator | Friday 29 August 2025 15:08:42 +0000 (0:00:03.254) 0:00:10.500 ********* 2025-08-29 15:14:50.039401 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-08-29 15:14:50.039409 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-08-29 15:14:50.039415 | orchestrator | 2025-08-29 15:14:50.039454 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-08-29 15:14:50.039462 | orchestrator | Friday 29 August 2025 15:08:49 +0000 (0:00:06.132) 0:00:16.633 ********* 2025-08-29 15:14:50.039468 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:14:50.039474 | orchestrator | 2025-08-29 15:14:50.039480 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-08-29 15:14:50.039486 | orchestrator | Friday 29 August 2025 15:08:52 +0000 (0:00:03.359) 0:00:19.993 ********* 2025-08-29 15:14:50.039492 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:14:50.039499 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-08-29 15:14:50.039505 | orchestrator | 2025-08-29 15:14:50.039512 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-08-29 15:14:50.039529 | orchestrator | Friday 29 August 2025 15:08:56 +0000 (0:00:04.005) 0:00:23.998 ********* 2025-08-29 15:14:50.039536 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:14:50.039542 | orchestrator | 2025-08-29 15:14:50.039548 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-08-29 15:14:50.039555 | orchestrator | Friday 29 August 2025 15:08:59 +0000 (0:00:03.488) 0:00:27.487 ********* 2025-08-29 15:14:50.039561 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-08-29 15:14:50.039568 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-08-29 15:14:50.039574 | orchestrator | 2025-08-29 15:14:50.039581 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2025-08-29 15:14:50.039588 | orchestrator | Friday 29 August 2025 15:09:07 +0000 (0:00:08.033) 0:00:35.521 ********* 2025-08-29 15:14:50.039594 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.039601 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.039606 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.039612 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.039617 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.039624 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.039631 | orchestrator | 2025-08-29 15:14:50.039638 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-08-29 15:14:50.039646 | orchestrator | Friday 29 August 2025 15:09:09 +0000 (0:00:01.488) 0:00:37.010 ********* 2025-08-29 15:14:50.039652 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.039658 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.039665 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.039671 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.039679 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.039685 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.039692 | orchestrator | 2025-08-29 15:14:50.039699 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-08-29 15:14:50.039706 | orchestrator | Friday 29 August 2025 15:09:13 +0000 (0:00:03.949) 0:00:40.959 ********* 2025-08-29 15:14:50.039712 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:50.039718 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:50.039724 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:50.039730 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:14:50.039736 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:14:50.039742 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:14:50.039748 | orchestrator | 2025-08-29 15:14:50.039753 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-08-29 15:14:50.039760 | orchestrator | Friday 29 August 2025 15:09:14 +0000 (0:00:01.608) 0:00:42.567 ********* 2025-08-29 15:14:50.039766 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.039773 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.039779 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.039786 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.039792 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.039798 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.039805 | orchestrator | 2025-08-29 15:14:50.039811 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-08-29 15:14:50.039818 | orchestrator | Friday 29 August 2025 15:09:18 +0000 (0:00:03.760) 0:00:46.328 ********* 2025-08-29 15:14:50.039829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.039872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.039883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.039890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.039897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.039904 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.039917 | orchestrator | 2025-08-29 15:14:50.039928 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 15:14:50.039935 | orchestrator | Friday 29 August 2025 15:09:24 +0000 (0:00:05.305) 0:00:51.634 ********* 2025-08-29 15:14:50.039942 | orchestrator | [WARNING]: Skipped 2025-08-29 15:14:50.039949 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 15:14:50.039957 | orchestrator | due to this access issue: 2025-08-29 15:14:50.039963 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 15:14:50.039970 | orchestrator | a directory 2025-08-29 15:14:50.039976 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:14:50.039983 | orchestrator | 2025-08-29 15:14:50.039996 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:14:50.040003 | orchestrator | Friday 29 August 2025 15:09:25 +0000 (0:00:01.560) 0:00:53.194 ********* 2025-08-29 15:14:50.040011 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:14:50.040019 | orchestrator | 2025-08-29 15:14:50.040026 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 15:14:50.040032 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:01.538) 0:00:54.733 ********* 2025-08-29 15:14:50.040039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 
15:14:50.040125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040131 | orchestrator | 2025-08-29 15:14:50.040137 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-08-29 15:14:50.040143 | orchestrator | Friday 29 August 2025 15:09:32 +0000 (0:00:05.176) 0:00:59.910 ********* 2025-08-29 15:14:50.040150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040162 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.040168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040174 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.040190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040209 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.040215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040222 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040238 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040245 | orchestrator | 2025-08-29 15:14:50.040251 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-08-29 15:14:50.040257 | orchestrator | Friday 29 August 2025 15:09:36 +0000 (0:00:04.275) 0:01:04.185 ********* 2025-08-29 15:14:50.040266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040273 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.040285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040291 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.040298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040304 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.040311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040323 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040337 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040360 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040367 | orchestrator | 2025-08-29 15:14:50.040374 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-08-29 15:14:50.040386 | orchestrator | Friday 29 August 2025 15:09:40 +0000 (0:00:03.758) 0:01:07.944 ********* 2025-08-29 15:14:50.040393 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.040400 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.040407 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.040413 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040420 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040427 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040434 | orchestrator | 2025-08-29 15:14:50.040440 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-08-29 15:14:50.040447 | orchestrator | Friday 29 August 2025 15:09:43 +0000 (0:00:03.105) 0:01:11.049 ********* 2025-08-29 15:14:50.040452 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.040459 | orchestrator | 2025-08-29 15:14:50.040465 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-08-29 15:14:50.040472 | orchestrator | Friday 29 August 2025 15:09:43 +0000 (0:00:00.156) 0:01:11.206 ********* 2025-08-29 15:14:50.040478 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.040484 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.040491 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.040496 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040502 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040508 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040514 | orchestrator | 2025-08-29 15:14:50.040521 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-08-29 15:14:50.040534 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:01.128) 0:01:12.335 ********* 2025-08-29 15:14:50.040541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040548 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.040555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040562 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.040573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.040579 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.040592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040599 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040617 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040631 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040637 | orchestrator | 2025-08-29 15:14:50.040644 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 15:14:50.040650 | orchestrator | Friday 29 August 2025 15:09:48 +0000 (0:00:03.427) 0:01:15.763 ********* 2025-08-29 15:14:50.040657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040719 | orchestrator | 2025-08-29 15:14:50.040726 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 15:14:50.040732 | orchestrator | Friday 29 August 2025 15:09:53 +0000 (0:00:05.549) 0:01:21.313 ********* 2025-08-29 15:14:50.040746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.040798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040811 | orchestrator | 2025-08-29 15:14:50.040818 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 15:14:50.040824 | orchestrator | Friday 29 August 2025 15:10:04 +0000 (0:00:10.717) 0:01:32.030 ********* 2025-08-29 15:14:50.040831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040844 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040857 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040876 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.040909 | orchestrator | 2025-08-29 15:14:50.040916 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 15:14:50.040922 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:07.095) 0:01:39.126 ********* 2025-08-29 15:14:50.040929 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.040936 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.040942 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:50.040949 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.040955 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:50.040962 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:50.040969 | orchestrator | 2025-08-29 15:14:50.040975 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 15:14:50.040981 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:06.009) 0:01:45.135 ********* 2025-08-29 15:14:50.040989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.040996 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041003 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041015 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.041049 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 
15:14:50.041063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.041071 | orchestrator | 2025-08-29 15:14:50.041077 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 15:14:50.041114 | orchestrator | Friday 29 August 2025 15:10:27 +0000 (0:00:09.589) 0:01:54.725 ********* 2025-08-29 15:14:50.041122 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041130 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041136 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041142 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041149 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041156 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041163 | orchestrator | 2025-08-29 15:14:50.041169 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 15:14:50.041176 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:04.739) 0:01:59.464 ********* 2025-08-29 15:14:50.041192 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041200 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041207 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041213 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041220 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041227 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041234 | orchestrator | 2025-08-29 15:14:50.041241 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 15:14:50.041248 | orchestrator | Friday 29 August 2025 15:10:39 +0000 (0:00:07.522) 0:02:06.986 ********* 2025-08-29 15:14:50.041256 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041263 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041271 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041285 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041293 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041300 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041307 | orchestrator | 2025-08-29 15:14:50.041315 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 15:14:50.041322 | orchestrator | Friday 29 August 2025 15:10:44 +0000 (0:00:04.890) 0:02:11.877 ********* 2025-08-29 15:14:50.041329 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041335 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041342 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:14:50.041349 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041356 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041363 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041369 | orchestrator | 2025-08-29 15:14:50.041375 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-08-29 15:14:50.041382 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:06.369) 0:02:18.246 ********* 2025-08-29 15:14:50.041389 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041396 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041402 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041408 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041415 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041420 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041427 | orchestrator | 2025-08-29 15:14:50.041434 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 15:14:50.041440 | orchestrator | Friday 29 August 2025 15:10:57 +0000 (0:00:06.557) 0:02:24.804 ********* 2025-08-29 15:14:50.041447 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041454 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041460 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041466 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041472 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041478 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041483 | orchestrator | 2025-08-29 15:14:50.041489 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 15:14:50.041495 | orchestrator | Friday 29 August 2025 15:11:02 +0000 (0:00:05.641) 0:02:30.445 ********* 2025-08-29 15:14:50.041501 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:14:50.041518 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041525 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:14:50.041532 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041538 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:14:50.041544 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041550 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:14:50.041557 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041564 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:14:50.041570 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041577 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:14:50.041584 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041591 | orchestrator | 2025-08-29 15:14:50.041598 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 15:14:50.041604 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:06.039) 0:02:36.485 ********* 2025-08-29 15:14:50.041612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.041621 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.041652 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.041673 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041688 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041702 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041716 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041723 | orchestrator | 2025-08-29 15:14:50.041733 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 15:14:50.041740 | orchestrator | Friday 29 August 2025 15:11:14 +0000 (0:00:05.268) 0:02:41.754 ********* 2025-08-29 15:14:50.041752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.041763 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.041782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.041794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.041809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.041816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041822 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.041832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041840 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.041945 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.041965 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.041972 | orchestrator | 2025-08-29 15:14:50.041979 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-08-29 15:14:50.041986 | orchestrator | Friday 29 August 2025 15:11:20 +0000 (0:00:06.797) 0:02:48.551 ********* 2025-08-29 15:14:50.041993 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.041999 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042005 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042011 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042078 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042172 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042186 | orchestrator | 2025-08-29 15:14:50.042193 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 15:14:50.042200 | orchestrator | Friday 29 August 2025 15:11:25 +0000 (0:00:04.725) 0:02:53.276 ********* 2025-08-29 15:14:50.042207 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042220 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042226 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:14:50.042233 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:14:50.042240 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:14:50.042247 | orchestrator | 2025-08-29 15:14:50.042253 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-08-29 15:14:50.042260 | orchestrator | Friday 29 August 2025 15:11:34 +0000 (0:00:08.620) 0:03:01.897 ********* 2025-08-29 15:14:50.042267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042273 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042280 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042286 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042293 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042299 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042305 | orchestrator | 2025-08-29 15:14:50.042312 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-08-29 15:14:50.042319 | orchestrator | Friday 29 August 2025 15:11:41 +0000 (0:00:07.217) 0:03:09.115 ********* 2025-08-29 15:14:50.042325 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042331 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042338 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042345 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042353 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042359 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042366 | orchestrator | 2025-08-29 15:14:50.042374 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-08-29 15:14:50.042381 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:05.567) 0:03:14.683 ********* 2025-08-29 15:14:50.042388 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042395 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042401 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042407 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042414 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042420 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042427 | orchestrator | 2025-08-29 15:14:50.042434 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-08-29 15:14:50.042440 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:04.710) 0:03:19.394 ********* 2025-08-29 15:14:50.042447 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042464 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042471 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042477 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042483 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042490 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042496 | orchestrator | 2025-08-29 15:14:50.042503 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-08-29 15:14:50.042509 | orchestrator | Friday 29 August 2025 15:11:58 +0000 (0:00:07.111) 0:03:26.506 ********* 2025-08-29 15:14:50.042516 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042523 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042529 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042535 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042542 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042549 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042554 | orchestrator | 2025-08-29 15:14:50.042568 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-08-29 15:14:50.042574 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:06.683) 0:03:33.190 ********* 2025-08-29 15:14:50.042580 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042587 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042593 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042599 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042606 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042611 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042617 | orchestrator | 2025-08-29 15:14:50.042622 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-08-29 15:14:50.042628 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:07.099) 0:03:40.289 ********* 2025-08-29 15:14:50.042634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042653 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042661 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042667 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
15:14:50.042674 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042682 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042689 | orchestrator | 2025-08-29 15:14:50.042695 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-08-29 15:14:50.042701 | orchestrator | Friday 29 August 2025 15:12:19 +0000 (0:00:06.994) 0:03:47.284 ********* 2025-08-29 15:14:50.042706 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042712 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042718 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042724 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042730 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042741 | orchestrator | 2025-08-29 15:14:50.042748 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-08-29 15:14:50.042754 | orchestrator | Friday 29 August 2025 15:12:26 +0000 (0:00:06.750) 0:03:54.035 ********* 2025-08-29 15:14:50.042760 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:14:50.042768 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:14:50.042775 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042782 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042788 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:14:50.042795 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042802 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:14:50.042809 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042815 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:14:50.042830 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042836 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:14:50.042842 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042849 | orchestrator | 2025-08-29 15:14:50.042855 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 15:14:50.042861 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:07.598) 0:04:01.633 ********* 2025-08-29 15:14:50.042870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.042877 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.042890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.042897 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.042913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:14:50.042921 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:50.042928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.042943 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.042950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.042957 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.042963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:14:50.042970 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.042976 | orchestrator | 2025-08-29 15:14:50.042984 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 15:14:50.042991 | orchestrator | Friday 29 August 2025 15:12:41 +0000 (0:00:07.768) 0:04:09.402 ********* 2025-08-29 15:14:50.043001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.043015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.043026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.043033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:14:50.043039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.043056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:14:50.043063 | orchestrator | 2025-08-29 15:14:50.043070 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:14:50.043080 | orchestrator | Friday 29 August 2025 15:12:48 +0000 (0:00:06.820) 0:04:16.222 ********* 2025-08-29 15:14:50.043117 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:14:50.043124 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:50.043130 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:50.043136 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:14:50.043142 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:14:50.043149 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:14:50.043155 | orchestrator | 2025-08-29 15:14:50.043168 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-08-29 15:14:50.043174 | orchestrator | Friday 29 August 2025 15:12:50 +0000 (0:00:01.656) 0:04:17.880 ********* 2025-08-29 15:14:50.043180 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:50.043187 | orchestrator | 2025-08-29 15:14:50.043194 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-08-29 15:14:50.043200 | orchestrator | Friday 29 August 2025 15:12:52 +0000 (0:00:02.448) 0:04:20.328 ********* 2025-08-29 15:14:50.043205 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:50.043211 | orchestrator | 2025-08-29 15:14:50.043217 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-08-29 15:14:50.043223 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:02.865) 0:04:23.194 ********* 2025-08-29 15:14:50.043229 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:50.043236 | orchestrator | 2025-08-29 15:14:50.043242 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:14:50.043248 | orchestrator | Friday 29 August 2025 15:13:40 +0000 (0:00:45.281) 0:05:08.476 ********* 2025-08-29 15:14:50.043255 | orchestrator | 2025-08-29 15:14:50.043261 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:14:50.043267 | orchestrator | Friday 29 August 2025 15:13:41 +0000 (0:00:00.164) 0:05:08.640 ********* 2025-08-29 15:14:50.043274 | orchestrator | 2025-08-29 15:14:50.043281 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:14:50.043287 | orchestrator | Friday 29 August 2025 15:13:41 +0000 (0:00:00.193) 0:05:08.833 ********* 2025-08-29 15:14:50.043294 | orchestrator | 2025-08-29 15:14:50.043300 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:14:50.043306 | orchestrator | Friday 29 August 2025 15:13:41 +0000 (0:00:00.215) 0:05:09.049 ********* 2025-08-29 15:14:50.043313 | orchestrator | 2025-08-29 15:14:50.043320 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:14:50.043327 | orchestrator | Friday 29 August 2025 15:13:42 +0000 (0:00:01.156) 0:05:10.206 ********* 2025-08-29 15:14:50.043333 | orchestrator | 2025-08-29 15:14:50.043340 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:14:50.043347 | orchestrator | Friday 29 August 2025 15:13:42 +0000 (0:00:00.222) 0:05:10.429 ********* 2025-08-29 15:14:50.043353 | orchestrator | 2025-08-29 15:14:50.043360 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-08-29 15:14:50.043367 | orchestrator | Friday 29 August 2025 15:13:43 +0000 (0:00:00.234) 0:05:10.663 ********* 2025-08-29 15:14:50.043374 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:50.043380 | orchestrator | changed: [testbed-node-1] 
2025-08-29 15:14:50.043387 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:50.043393 | orchestrator | 2025-08-29 15:14:50.043400 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-08-29 15:14:50.043406 | orchestrator | Friday 29 August 2025 15:14:12 +0000 (0:00:29.026) 0:05:39.689 ********* 2025-08-29 15:14:50.043412 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:14:50.043419 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:14:50.043425 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:14:50.043431 | orchestrator | 2025-08-29 15:14:50.043438 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:14:50.043445 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:14:50.043454 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:14:50.043461 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:14:50.043468 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 15:14:50.043481 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 15:14:50.043487 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 15:14:50.043494 | orchestrator | 2025-08-29 15:14:50.043500 | orchestrator | 2025-08-29 15:14:50.043507 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:14:50.043519 | orchestrator | Friday 29 August 2025 15:14:47 +0000 (0:00:35.545) 0:06:15.235 ********* 2025-08-29 15:14:50.043526 | orchestrator | =============================================================================== 2025-08-29 15:14:50.043533 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.28s 2025-08-29 15:14:50.043540 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 35.55s 2025-08-29 15:14:50.043546 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.03s 2025-08-29 15:14:50.043553 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 10.72s 2025-08-29 15:14:50.043567 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 9.59s 2025-08-29 15:14:50.043574 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 8.62s 2025-08-29 15:14:50.043581 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.03s 2025-08-29 15:14:50.043587 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 7.77s 2025-08-29 15:14:50.043594 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 7.60s 2025-08-29 15:14:50.043601 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 7.52s 2025-08-29 15:14:50.043608 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 7.22s 2025-08-29 15:14:50.043614 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 7.11s 2025-08-29 15:14:50.043621 | orchestrator | neutron : Copying over 
nsx.ini ------------------------------------------ 7.10s 2025-08-29 15:14:50.043627 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 7.10s 2025-08-29 15:14:50.043633 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 6.99s 2025-08-29 15:14:50.043640 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.82s 2025-08-29 15:14:50.043647 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 6.80s 2025-08-29 15:14:50.043653 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 6.75s 2025-08-29 15:14:50.043659 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 6.68s 2025-08-29 15:14:50.043665 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 6.56s 2025-08-29 15:14:50.043672 | orchestrator | 2025-08-29 15:14:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:53.067556 | orchestrator | 2025-08-29 15:14:53 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:53.068955 | orchestrator | 2025-08-29 15:14:53 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:53.072185 | orchestrator | 2025-08-29 15:14:53 | INFO  | Task 9ecba729-77e8-4c7e-86ac-65c38644001e is in state STARTED 2025-08-29 15:14:53.076956 | orchestrator | 2025-08-29 15:14:53 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:53.077015 | orchestrator | 2025-08-29 15:14:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:56.128858 | orchestrator | 2025-08-29 15:14:56 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:56.129465 | orchestrator | 2025-08-29 15:14:56 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:56.130838 | orchestrator | 2025-08-29 15:14:56 | INFO  | Task 9ecba729-77e8-4c7e-86ac-65c38644001e is in state STARTED 2025-08-29 15:14:56.131726 | orchestrator | 2025-08-29 15:14:56 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:56.131771 | orchestrator | 2025-08-29 15:14:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:59.167808 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:14:59.168542 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:14:59.170267 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task 9ecba729-77e8-4c7e-86ac-65c38644001e is in state STARTED 2025-08-29 15:14:59.174189 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:14:59.174494 | orchestrator | 2025-08-29 15:14:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:02.215481 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:02.215588 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:02.217645 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task 9ecba729-77e8-4c7e-86ac-65c38644001e is in state SUCCESS 2025-08-29 15:15:02.220290 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 
2025-08-29 15:15:02.220933 | orchestrator | 2025-08-29 15:15:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:05.276264 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:05.278495 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:05.281174 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:05.282056 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:05.282130 | orchestrator | 2025-08-29 15:15:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:08.338196 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:08.340371 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:08.342928 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:08.344242 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:08.344308 | orchestrator | 2025-08-29 15:15:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:11.385668 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:11.388254 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:11.391538 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:11.393043 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:11.393129 | orchestrator | 2025-08-29 15:15:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:14.437663 | orchestrator | 2025-08-29 15:15:14 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:14.437758 | orchestrator | 2025-08-29 15:15:14 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:14.441772 | orchestrator | 2025-08-29 15:15:14 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:14.442432 | orchestrator | 2025-08-29 15:15:14 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:14.442464 | orchestrator | 2025-08-29 15:15:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:17.476259 | orchestrator | 2025-08-29 15:15:17 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:17.477128 | orchestrator | 2025-08-29 15:15:17 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:17.478738 | orchestrator | 2025-08-29 15:15:17 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:17.479642 | orchestrator | 2025-08-29 15:15:17 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:17.479779 | orchestrator | 2025-08-29 15:15:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:20.519837 | orchestrator | 2025-08-29 15:15:20 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:20.521481 
| orchestrator | 2025-08-29 15:15:20 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:20.526682 | orchestrator | 2025-08-29 15:15:20 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:20.529412 | orchestrator | 2025-08-29 15:15:20 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:20.529856 | orchestrator | 2025-08-29 15:15:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:23.572656 | orchestrator | 2025-08-29 15:15:23 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:23.574328 | orchestrator | 2025-08-29 15:15:23 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:23.576298 | orchestrator | 2025-08-29 15:15:23 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:23.578004 | orchestrator | 2025-08-29 15:15:23 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:23.578142 | orchestrator | 2025-08-29 15:15:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:26.619837 | orchestrator | 2025-08-29 15:15:26 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:26.620584 | orchestrator | 2025-08-29 15:15:26 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:26.621493 | orchestrator | 2025-08-29 15:15:26 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:26.624374 | orchestrator | 2025-08-29 15:15:26 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:26.624429 | orchestrator | 2025-08-29 15:15:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:29.667347 | orchestrator | 2025-08-29 15:15:29 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:29.668385 | orchestrator | 2025-08-29 15:15:29 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:29.669318 | orchestrator | 2025-08-29 15:15:29 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:29.670569 | orchestrator | 2025-08-29 15:15:29 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:29.670735 | orchestrator | 2025-08-29 15:15:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:32.705269 | orchestrator | 2025-08-29 15:15:32 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:32.706303 | orchestrator | 2025-08-29 15:15:32 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:32.707581 | orchestrator | 2025-08-29 15:15:32 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:32.709775 | orchestrator | 2025-08-29 15:15:32 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:32.709822 | orchestrator | 2025-08-29 15:15:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:35.750928 | orchestrator | 2025-08-29 15:15:35 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:35.752018 | orchestrator | 2025-08-29 15:15:35 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:35.753144 | orchestrator | 2025-08-29 15:15:35 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:35.754833 | 
orchestrator | 2025-08-29 15:15:35 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:35.754882 | orchestrator | 2025-08-29 15:15:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:38.793676 | orchestrator | 2025-08-29 15:15:38 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:38.795882 | orchestrator | 2025-08-29 15:15:38 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:38.797935 | orchestrator | 2025-08-29 15:15:38 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:38.799968 | orchestrator | 2025-08-29 15:15:38 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:38.800007 | orchestrator | 2025-08-29 15:15:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:41.855352 | orchestrator | 2025-08-29 15:15:41 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:41.858103 | orchestrator | 2025-08-29 15:15:41 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:41.860864 | orchestrator | 2025-08-29 15:15:41 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:41.864884 | orchestrator | 2025-08-29 15:15:41 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:41.864964 | orchestrator | 2025-08-29 15:15:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:44.908204 | orchestrator | 2025-08-29 15:15:44 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:44.910079 | orchestrator | 2025-08-29 15:15:44 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:44.911964 | orchestrator | 2025-08-29 15:15:44 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:44.913177 | orchestrator | 2025-08-29 15:15:44 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:44.913241 | orchestrator | 2025-08-29 15:15:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:47.967458 | orchestrator | 2025-08-29 15:15:47 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:47.968182 | orchestrator | 2025-08-29 15:15:47 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:47.969827 | orchestrator | 2025-08-29 15:15:47 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:47.971062 | orchestrator | 2025-08-29 15:15:47 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:47.971120 | orchestrator | 2025-08-29 15:15:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:51.020530 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:51.021963 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:51.023698 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state STARTED 2025-08-29 15:15:51.025855 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:51.025908 | orchestrator | 2025-08-29 15:15:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:54.076918 | orchestrator | 2025-08-29 
15:15:54 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:54.077249 | orchestrator | 2025-08-29 15:15:54 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:54.079257 | orchestrator | 2025-08-29 15:15:54 | INFO  | Task 83b86b67-7939-4edb-b283-67e6c52b8a3c is in state SUCCESS 2025-08-29 15:15:54.080583 | orchestrator | 2025-08-29 15:15:54.080618 | orchestrator | 2025-08-29 15:15:54.080624 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:15:54.080629 | orchestrator | 2025-08-29 15:15:54.080633 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:15:54.080637 | orchestrator | Friday 29 August 2025 15:14:58 +0000 (0:00:00.259) 0:00:00.259 ********* 2025-08-29 15:15:54.080642 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:54.080646 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:15:54.080650 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:15:54.080655 | orchestrator | 2025-08-29 15:15:54.080659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:15:54.080663 | orchestrator | Friday 29 August 2025 15:14:58 +0000 (0:00:00.352) 0:00:00.612 ********* 2025-08-29 15:15:54.080667 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 15:15:54.080672 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 15:15:54.080677 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 15:15:54.080683 | orchestrator | 2025-08-29 15:15:54.080689 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-08-29 15:15:54.080695 | orchestrator | 2025-08-29 15:15:54.080701 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-08-29 15:15:54.080707 | orchestrator | Friday 29 August 2025 15:14:59 +0000 (0:00:00.978) 0:00:01.590 ********* 2025-08-29 15:15:54.080713 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:54.080719 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:15:54.080724 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:15:54.080753 | orchestrator | 2025-08-29 15:15:54.080760 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:15:54.080773 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:15:54.080780 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:15:54.080785 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:15:54.080789 | orchestrator | 2025-08-29 15:15:54.080793 | orchestrator | 2025-08-29 15:15:54.080798 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:15:54.080870 | orchestrator | Friday 29 August 2025 15:15:00 +0000 (0:00:01.007) 0:00:02.598 ********* 2025-08-29 15:15:54.080876 | orchestrator | =============================================================================== 2025-08-29 15:15:54.080880 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.01s 2025-08-29 15:15:54.080884 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2025-08-29 15:15:54.080889 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-08-29 15:15:54.080895 | orchestrator | 2025-08-29 15:15:54.080910 | orchestrator | 2025-08-29 15:15:54.080925 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:15:54.080938 | orchestrator | 2025-08-29 15:15:54.080944 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:15:54.080949 | orchestrator | Friday 29 August 2025 15:13:43 +0000 (0:00:00.654) 0:00:00.654 ********* 2025-08-29 15:15:54.080955 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:54.080961 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:15:54.080966 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:15:54.080973 | orchestrator | 2025-08-29 15:15:54.080979 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:15:54.080985 | orchestrator | Friday 29 August 2025 15:13:45 +0000 (0:00:01.456) 0:00:02.111 ********* 2025-08-29 15:15:54.080991 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-08-29 15:15:54.080997 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-08-29 15:15:54.081050 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-08-29 15:15:54.081056 | orchestrator | 2025-08-29 15:15:54.081059 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 15:15:54.081063 | orchestrator | 2025-08-29 15:15:54.081067 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:15:54.081071 | orchestrator | Friday 29 August 2025 15:13:46 +0000 (0:00:01.092) 0:00:03.203 ********* 2025-08-29 15:15:54.081074 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:54.081078 | orchestrator | 2025-08-29 15:15:54.081082 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-08-29 15:15:54.081086 | orchestrator | Friday 29 August 2025 15:13:47 +0000 (0:00:01.414) 0:00:04.617 ********* 2025-08-29 15:15:54.081090 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 15:15:54.081094 | orchestrator | 2025-08-29 15:15:54.081098 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 15:15:54.081102 | orchestrator | Friday 29 August 2025 15:13:51 +0000 (0:00:03.928) 0:00:08.546 ********* 2025-08-29 15:15:54.081106 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 15:15:54.081110 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 15:15:54.081114 | orchestrator | 2025-08-29 15:15:54.081118 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 15:15:54.081121 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:07.174) 0:00:15.721 ********* 2025-08-29 15:15:54.081125 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:15:54.081129 | orchestrator | 2025-08-29 15:15:54.081133 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-08-29 15:15:54.081137 | orchestrator | Friday 29 August 2025 15:14:02 +0000 (0:00:03.248) 
0:00:18.970 ********* 2025-08-29 15:15:54.081151 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:15:54.081155 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 15:15:54.081160 | orchestrator | 2025-08-29 15:15:54.081163 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-08-29 15:15:54.081167 | orchestrator | Friday 29 August 2025 15:14:06 +0000 (0:00:04.213) 0:00:23.184 ********* 2025-08-29 15:15:54.081177 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:15:54.081182 | orchestrator | 2025-08-29 15:15:54.081186 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 15:15:54.081191 | orchestrator | Friday 29 August 2025 15:14:10 +0000 (0:00:04.209) 0:00:27.393 ********* 2025-08-29 15:15:54.081195 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 15:15:54.081199 | orchestrator | 2025-08-29 15:15:54.081204 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 15:15:54.081208 | orchestrator | Friday 29 August 2025 15:14:16 +0000 (0:00:05.368) 0:00:32.762 ********* 2025-08-29 15:15:54.081212 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.081216 | orchestrator | 2025-08-29 15:15:54.081221 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 15:15:54.081225 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:03.592) 0:00:36.354 ********* 2025-08-29 15:15:54.081229 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.081233 | orchestrator | 2025-08-29 15:15:54.081237 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-08-29 15:15:54.081242 | orchestrator | Friday 29 August 2025 15:14:23 +0000 (0:00:04.097) 0:00:40.452 ********* 2025-08-29 15:15:54.081246 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.081250 | orchestrator | 2025-08-29 15:15:54.081254 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 15:15:54.081259 | orchestrator | Friday 29 August 2025 15:14:27 +0000 (0:00:03.781) 0:00:44.233 ********* 2025-08-29 15:15:54.081265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081313 | orchestrator | 2025-08-29 15:15:54.081317 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 15:15:54.081322 | orchestrator | Friday 29 August 2025 15:14:29 +0000 (0:00:01.855) 0:00:46.089 ********* 2025-08-29 15:15:54.081326 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:54.081330 | orchestrator | 2025-08-29 15:15:54.081335 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 15:15:54.081339 | orchestrator | Friday 29 August 2025 15:14:29 +0000 (0:00:00.166) 0:00:46.255 ********* 2025-08-29 15:15:54.081343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:54.081348 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:54.081355 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:54.081359 | orchestrator | 2025-08-29 15:15:54.081364 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 15:15:54.081368 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:00.851) 0:00:47.107 ********* 2025-08-29 15:15:54.081372 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:15:54.081377 | orchestrator | 2025-08-29 15:15:54.081381 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 15:15:54.081386 | orchestrator | Friday 29 August 2025 15:14:31 +0000 (0:00:01.188) 0:00:48.295 ********* 2025-08-29 15:15:54.081394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081428 | orchestrator | 2025-08-29 15:15:54.081431 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 15:15:54.081438 | orchestrator | Friday 29 August 2025 15:14:34 +0000 (0:00:02.727) 0:00:51.023 ********* 2025-08-29 15:15:54.081442 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:54.081446 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:15:54.081449 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:15:54.081453 | orchestrator | 2025-08-29 15:15:54.081457 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:15:54.081461 | orchestrator | Friday 29 August 2025 15:14:34 +0000 (0:00:00.356) 0:00:51.379 ********* 2025-08-29 15:15:54.081465 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:54.081469 | orchestrator | 2025-08-29 15:15:54.081472 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 15:15:54.081476 | orchestrator | Friday 29 August 2025 15:14:35 +0000 (0:00:00.916) 0:00:52.296 ********* 2025-08-29 15:15:54.081480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081514 | orchestrator | 2025-08-29 15:15:54.081518 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 15:15:54.081521 | orchestrator | Friday 29 August 2025 15:14:38 +0000 (0:00:02.710) 0:00:55.006 ********* 2025-08-29 15:15:54.081526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081539 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:54.081546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081554 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:54.081558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081570 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:54.081573 | orchestrator | 2025-08-29 15:15:54.081583 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 15:15:54.081587 | orchestrator | Friday 29 August 2025 15:14:39 +0000 (0:00:00.803) 0:00:55.810 ********* 2025-08-29 15:15:54.081591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-08-29 15:15:54.081605 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:54.081608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081620 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:54.081627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081635 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:54.081638 | orchestrator | 2025-08-29 15:15:54.081642 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-08-29 15:15:54.081646 | orchestrator | Friday 29 August 2025 15:14:40 +0000 (0:00:01.507) 0:00:57.318 ********* 2025-08-29 15:15:54.081799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081836 | orchestrator | 2025-08-29 15:15:54.081840 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 15:15:54.081844 | orchestrator | Friday 29 August 2025 15:14:43 +0000 (0:00:02.645) 0:00:59.964 ********* 2025-08-29 15:15:54.081848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081885 | orchestrator | 2025-08-29 15:15:54.081888 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 15:15:54.081892 | orchestrator | Friday 29 August 2025 15:14:51 +0000 (0:00:07.715) 0:01:07.679 ********* 2025-08-29 15:15:54.081899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:54.081914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081926 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:54.081930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:54.081936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:54.081940 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:54.081944 | orchestrator | 2025-08-29 15:15:54.081948 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 15:15:54.081952 | orchestrator | Friday 29 August 2025 15:14:52 +0000 (0:00:01.633) 0:01:09.313 ********* 2025-08-29 15:15:54.081958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:54.081975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:54.081992 | orchestrator | 2025-08-29 15:15:54.081995 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:15:54.081999 | orchestrator | Friday 29 August 2025 15:14:56 +0000 (0:00:03.449) 0:01:12.765 ********* 2025-08-29 15:15:54.082003 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:54.082007 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:54.082096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:54.082102 | orchestrator | 2025-08-29 15:15:54.082106 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-08-29 15:15:54.082110 | orchestrator | Friday 29 August 2025 15:14:56 +0000 (0:00:00.666) 0:01:13.431 ********* 2025-08-29 15:15:54.082114 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.082118 | orchestrator | 2025-08-29 15:15:54.082122 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-08-29 15:15:54.082125 | orchestrator | Friday 29 August 2025 15:14:59 +0000 (0:00:02.394) 0:01:15.826 ********* 2025-08-29 15:15:54.082129 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.082133 | orchestrator | 2025-08-29 15:15:54.082136 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-08-29 15:15:54.082140 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:02.300) 0:01:18.126 ********* 2025-08-29 15:15:54.082144 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.082148 | orchestrator | 2025-08-29 15:15:54.082152 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:15:54.082155 | orchestrator | Friday 29 August 2025 15:15:18 +0000 (0:00:16.684) 0:01:34.811 ********* 2025-08-29 15:15:54.082159 | orchestrator | 2025-08-29 15:15:54.082163 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:15:54.082167 | orchestrator | Friday 29 August 2025 15:15:18 +0000 (0:00:00.076) 0:01:34.887 ********* 2025-08-29 15:15:54.082170 | orchestrator | 2025-08-29 15:15:54.082174 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:15:54.082178 | orchestrator | Friday 29 August 2025 15:15:18 +0000 
(0:00:00.072) 0:01:34.960 ********* 2025-08-29 15:15:54.082182 | orchestrator | 2025-08-29 15:15:54.082185 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-08-29 15:15:54.082189 | orchestrator | Friday 29 August 2025 15:15:18 +0000 (0:00:00.068) 0:01:35.029 ********* 2025-08-29 15:15:54.082193 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.082197 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:15:54.082200 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:15:54.082204 | orchestrator | 2025-08-29 15:15:54.082208 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-08-29 15:15:54.082212 | orchestrator | Friday 29 August 2025 15:15:39 +0000 (0:00:21.580) 0:01:56.610 ********* 2025-08-29 15:15:54.082215 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:15:54.082219 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:54.082223 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:15:54.082227 | orchestrator | 2025-08-29 15:15:54.082231 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:15:54.082235 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:15:54.082241 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:15:54.082244 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:15:54.082248 | orchestrator | 2025-08-29 15:15:54.082252 | orchestrator | 2025-08-29 15:15:54.082256 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:15:54.082273 | orchestrator | Friday 29 August 2025 15:15:50 +0000 (0:00:11.004) 0:02:07.615 ********* 2025-08-29 15:15:54.082277 | orchestrator | =============================================================================== 2025-08-29 15:15:54.082280 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.58s 2025-08-29 15:15:54.082291 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.68s 2025-08-29 15:15:54.082295 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.01s 2025-08-29 15:15:54.082304 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.72s 2025-08-29 15:15:54.082308 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.17s 2025-08-29 15:15:54.082312 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 5.37s 2025-08-29 15:15:54.082315 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.22s 2025-08-29 15:15:54.082319 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.21s 2025-08-29 15:15:54.082323 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.10s 2025-08-29 15:15:54.082327 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.93s 2025-08-29 15:15:54.082330 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.78s 2025-08-29 15:15:54.082334 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.59s 2025-08-29 
15:15:54.082338 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.45s 2025-08-29 15:15:54.082341 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.25s 2025-08-29 15:15:54.082345 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.73s 2025-08-29 15:15:54.082349 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.71s 2025-08-29 15:15:54.082355 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.65s 2025-08-29 15:15:54.082359 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.39s 2025-08-29 15:15:54.082363 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.30s 2025-08-29 15:15:54.082367 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.86s 2025-08-29 15:15:54.082370 | orchestrator | 2025-08-29 15:15:54 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:54.082374 | orchestrator | 2025-08-29 15:15:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:57.138368 | orchestrator | 2025-08-29 15:15:57 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:15:57.138889 | orchestrator | 2025-08-29 15:15:57 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:15:57.139858 | orchestrator | 2025-08-29 15:15:57 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:15:57.139957 | orchestrator | 2025-08-29 15:15:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:00.175861 | orchestrator | 2025-08-29 15:16:00 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:00.177202 | orchestrator | 2025-08-29 15:16:00 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:00.177935 | orchestrator | 2025-08-29 15:16:00 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:00.178168 | orchestrator | 2025-08-29 15:16:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:03.220131 | orchestrator | 2025-08-29 15:16:03 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:03.220228 | orchestrator | 2025-08-29 15:16:03 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:03.224677 | orchestrator | 2025-08-29 15:16:03 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:03.224763 | orchestrator | 2025-08-29 15:16:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:06.295262 | orchestrator | 2025-08-29 15:16:06 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:06.298348 | orchestrator | 2025-08-29 15:16:06 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:06.301285 | orchestrator | 2025-08-29 15:16:06 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:06.302103 | orchestrator | 2025-08-29 15:16:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:09.382291 | orchestrator | 2025-08-29 15:16:09 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:09.383056 | orchestrator | 2025-08-29 15:16:09 | INFO  | Task 
acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:09.383903 | orchestrator | 2025-08-29 15:16:09 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:09.384102 | orchestrator | 2025-08-29 15:16:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:12.445427 | orchestrator | 2025-08-29 15:16:12 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:12.447828 | orchestrator | 2025-08-29 15:16:12 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:12.449456 | orchestrator | 2025-08-29 15:16:12 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:12.449512 | orchestrator | 2025-08-29 15:16:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:15.540056 | orchestrator | 2025-08-29 15:16:15 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:15.540437 | orchestrator | 2025-08-29 15:16:15 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:15.541801 | orchestrator | 2025-08-29 15:16:15 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:15.543182 | orchestrator | 2025-08-29 15:16:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:18.601721 | orchestrator | 2025-08-29 15:16:18 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:18.603278 | orchestrator | 2025-08-29 15:16:18 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:18.605435 | orchestrator | 2025-08-29 15:16:18 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:18.605631 | orchestrator | 2025-08-29 15:16:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:21.650329 | orchestrator | 2025-08-29 15:16:21 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:21.652639 | orchestrator | 2025-08-29 15:16:21 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:21.653090 | orchestrator | 2025-08-29 15:16:21 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:21.653119 | orchestrator | 2025-08-29 15:16:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:24.708665 | orchestrator | 2025-08-29 15:16:24 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:24.709682 | orchestrator | 2025-08-29 15:16:24 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:24.711558 | orchestrator | 2025-08-29 15:16:24 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:24.711594 | orchestrator | 2025-08-29 15:16:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:27.757168 | orchestrator | 2025-08-29 15:16:27 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:27.759646 | orchestrator | 2025-08-29 15:16:27 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:27.761559 | orchestrator | 2025-08-29 15:16:27 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:27.761660 | orchestrator | 2025-08-29 15:16:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:30.805739 | orchestrator | 2025-08-29 15:16:30 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state 
STARTED 2025-08-29 15:16:30.805822 | orchestrator | 2025-08-29 15:16:30 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:30.807855 | orchestrator | 2025-08-29 15:16:30 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:30.807900 | orchestrator | 2025-08-29 15:16:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:33.857092 | orchestrator | 2025-08-29 15:16:33 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:33.858136 | orchestrator | 2025-08-29 15:16:33 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:33.859936 | orchestrator | 2025-08-29 15:16:33 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:33.860120 | orchestrator | 2025-08-29 15:16:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:36.911620 | orchestrator | 2025-08-29 15:16:36 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:36.913712 | orchestrator | 2025-08-29 15:16:36 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:36.915710 | orchestrator | 2025-08-29 15:16:36 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:36.915758 | orchestrator | 2025-08-29 15:16:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:39.960534 | orchestrator | 2025-08-29 15:16:39 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state STARTED 2025-08-29 15:16:39.962346 | orchestrator | 2025-08-29 15:16:39 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state STARTED 2025-08-29 15:16:39.963715 | orchestrator | 2025-08-29 15:16:39 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:39.963780 | orchestrator | 2025-08-29 15:16:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:43.020445 | orchestrator | 2025-08-29 15:16:43 | INFO  | Task de679f0c-5047-4dc0-ab29-59e133e20039 is in state SUCCESS 2025-08-29 15:16:43.023317 | orchestrator | 2025-08-29 15:16:43.023375 | orchestrator | 2025-08-29 15:16:43.023383 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:16:43.023391 | orchestrator | 2025-08-29 15:16:43.023397 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-08-29 15:16:43.023404 | orchestrator | Friday 29 August 2025 15:05:22 +0000 (0:00:00.668) 0:00:00.668 ********* 2025-08-29 15:16:43.023411 | orchestrator | changed: [testbed-manager] 2025-08-29 15:16:43.023419 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.023426 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:43.023432 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.023438 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.023444 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.023450 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.023456 | orchestrator | 2025-08-29 15:16:43.023462 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:16:43.023468 | orchestrator | Friday 29 August 2025 15:05:23 +0000 (0:00:01.458) 0:00:02.127 ********* 2025-08-29 15:16:43.023474 | orchestrator | changed: [testbed-manager] 2025-08-29 15:16:43.023480 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.023486 | orchestrator | 
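The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy tooling polling its task queue until each queued task finishes. A minimal sketch of that wait-until-done pattern, assuming a hypothetical get_task_state() callable (the real client, its API and the task backend are not part of this log):

import time

def wait_for_tasks(get_task_state, task_ids, poll_interval=1.0, timeout=3600.0):
    """Poll until every task has left STARTED, as in the loop logged above."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still not finished: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical helper, e.g. an API call
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {poll_interval:.0f} second(s) until the next check")
            time.sleep(poll_interval)

The fixed one-second interval matches the behaviour logged here; a longer interval or a backoff would only reduce log chatter on long-running steps such as the Nova bootstrap below.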
changed: [testbed-node-1] 2025-08-29 15:16:43.023492 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.023497 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.023527 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.023534 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.023540 | orchestrator | 2025-08-29 15:16:43.023546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:16:43.023552 | orchestrator | Friday 29 August 2025 15:05:24 +0000 (0:00:00.907) 0:00:03.035 ********* 2025-08-29 15:16:43.023558 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-08-29 15:16:43.023564 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 15:16:43.023570 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 15:16:43.023576 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 15:16:43.023581 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-08-29 15:16:43.023587 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-08-29 15:16:43.023677 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-08-29 15:16:43.023689 | orchestrator | 2025-08-29 15:16:43.023699 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-08-29 15:16:43.023708 | orchestrator | 2025-08-29 15:16:43.023712 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 15:16:43.023716 | orchestrator | Friday 29 August 2025 15:05:26 +0000 (0:00:01.726) 0:00:04.762 ********* 2025-08-29 15:16:43.023721 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.023724 | orchestrator | 2025-08-29 15:16:43.023728 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-08-29 15:16:43.023732 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:01.224) 0:00:05.986 ********* 2025-08-29 15:16:43.023736 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-08-29 15:16:43.023741 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-08-29 15:16:43.023782 | orchestrator | 2025-08-29 15:16:43.023786 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-08-29 15:16:43.023790 | orchestrator | Friday 29 August 2025 15:05:31 +0000 (0:00:03.889) 0:00:09.876 ********* 2025-08-29 15:16:43.023794 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:16:43.023798 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:16:43.023802 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.023811 | orchestrator | 2025-08-29 15:16:43.023815 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 15:16:43.023824 | orchestrator | Friday 29 August 2025 15:05:34 +0000 (0:00:03.533) 0:00:13.410 ********* 2025-08-29 15:16:43.023828 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.023832 | orchestrator | 2025-08-29 15:16:43.023836 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-08-29 15:16:43.023839 | orchestrator | Friday 29 August 2025 15:05:35 +0000 (0:00:00.828) 0:00:14.238 ********* 2025-08-29 15:16:43.023843 | orchestrator | changed: [testbed-node-0] 2025-08-29 
15:16:43.023847 | orchestrator | 2025-08-29 15:16:43.023851 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-08-29 15:16:43.023854 | orchestrator | Friday 29 August 2025 15:05:37 +0000 (0:00:01.999) 0:00:16.238 ********* 2025-08-29 15:16:43.023858 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.023862 | orchestrator | 2025-08-29 15:16:43.023866 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:43.023870 | orchestrator | Friday 29 August 2025 15:05:42 +0000 (0:00:05.041) 0:00:21.279 ********* 2025-08-29 15:16:43.023874 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.023889 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.023894 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.023898 | orchestrator | 2025-08-29 15:16:43.023917 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 15:16:43.023922 | orchestrator | Friday 29 August 2025 15:05:43 +0000 (0:00:00.628) 0:00:21.908 ********* 2025-08-29 15:16:43.023933 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.023937 | orchestrator | 2025-08-29 15:16:43.023940 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-08-29 15:16:43.023945 | orchestrator | Friday 29 August 2025 15:06:12 +0000 (0:00:29.111) 0:00:51.019 ********* 2025-08-29 15:16:43.023949 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.023969 | orchestrator | 2025-08-29 15:16:43.023973 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:16:43.023976 | orchestrator | Friday 29 August 2025 15:06:29 +0000 (0:00:16.721) 0:01:07.741 ********* 2025-08-29 15:16:43.023980 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.023984 | orchestrator | 2025-08-29 15:16:43.023988 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:16:43.023991 | orchestrator | Friday 29 August 2025 15:06:43 +0000 (0:00:14.151) 0:01:21.892 ********* 2025-08-29 15:16:43.024005 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.024009 | orchestrator | 2025-08-29 15:16:43.024013 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-08-29 15:16:43.024017 | orchestrator | Friday 29 August 2025 15:06:45 +0000 (0:00:02.450) 0:01:24.342 ********* 2025-08-29 15:16:43.024021 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024024 | orchestrator | 2025-08-29 15:16:43.024028 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:43.024032 | orchestrator | Friday 29 August 2025 15:06:46 +0000 (0:00:00.651) 0:01:24.994 ********* 2025-08-29 15:16:43.024036 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.024040 | orchestrator | 2025-08-29 15:16:43.024044 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 15:16:43.024048 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:00.682) 0:01:25.677 ********* 2025-08-29 15:16:43.024051 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.024055 | orchestrator | 2025-08-29 15:16:43.024059 | orchestrator | TASK [Bootstrap upgrade] 
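The "Create cell0 mappings", "Get a list of existing cells" and, further below, "Create cell" tasks wrap nova-manage cell_v2 invocations executed through a bootstrap container. A rough by-hand equivalent, assuming the Docker CLI on a controller and reusing the nova_api container name seen in this deployment; the dedicated bootstrap container, entrypoint and extra flags kolla-ansible actually uses may differ, and the cell name is illustrative:

import subprocess

def nova_manage(*args):
    # Sketch only: run nova-manage inside an existing nova container.
    cmd = ["docker", "exec", "nova_api", "nova-manage", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

nova_manage("api_db", "sync")                             # nova_api schema
nova_manage("cell_v2", "map_cell0")                       # "Create cell0 mappings"
print(nova_manage("cell_v2", "list_cells", "--verbose"))  # "Get a list of existing cells"
nova_manage("cell_v2", "create_cell", "--name", "cell1")  # "Create cell" (name illustrative)
nova_manage("db", "sync")                                 # cell database schema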
******************************************************* 2025-08-29 15:16:43.024063 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:17.209) 0:01:42.887 ********* 2025-08-29 15:16:43.024066 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024070 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024074 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024078 | orchestrator | 2025-08-29 15:16:43.024081 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-08-29 15:16:43.024085 | orchestrator | 2025-08-29 15:16:43.024089 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 15:16:43.024093 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:00.405) 0:01:43.292 ********* 2025-08-29 15:16:43.024097 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.024100 | orchestrator | 2025-08-29 15:16:43.024104 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-08-29 15:16:43.024108 | orchestrator | Friday 29 August 2025 15:07:05 +0000 (0:00:01.055) 0:01:44.348 ********* 2025-08-29 15:16:43.024112 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024115 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024119 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.024123 | orchestrator | 2025-08-29 15:16:43.024127 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-08-29 15:16:43.024130 | orchestrator | Friday 29 August 2025 15:07:07 +0000 (0:00:02.080) 0:01:46.429 ********* 2025-08-29 15:16:43.024134 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024138 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024142 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.024145 | orchestrator | 2025-08-29 15:16:43.024149 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 15:16:43.024153 | orchestrator | Friday 29 August 2025 15:07:09 +0000 (0:00:02.104) 0:01:48.533 ********* 2025-08-29 15:16:43.024161 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024165 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024168 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024172 | orchestrator | 2025-08-29 15:16:43.024176 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 15:16:43.024180 | orchestrator | Friday 29 August 2025 15:07:10 +0000 (0:00:00.446) 0:01:48.979 ********* 2025-08-29 15:16:43.024183 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:16:43.024187 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024191 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:16:43.024195 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024199 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 15:16:43.024202 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-08-29 15:16:43.024206 | orchestrator | 2025-08-29 15:16:43.024210 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 15:16:43.024214 | orchestrator | Friday 29 August 2025 15:07:19 +0000 (0:00:08.686) 0:01:57.666 ********* 2025-08-29 15:16:43.024218 | 
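The service-rabbitmq tasks above ensure that the vhost and messaging user referenced by nova's transport URL exist before the containers are restarted; the role does this through Ansible's RabbitMQ modules delegated to the RabbitMQ hosts. Administratively it boils down to something like the sketch below, where the vhost, user name and password are placeholders (the real values come from kolla's configuration and secrets, which are not shown in this log):

import subprocess

def rabbitmqctl(*args):
    # Assumes the kolla "rabbitmq" container name and the Docker CLI.
    cmd = ["docker", "exec", "rabbitmq", "rabbitmqctl", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

vhost, user, password = "/", "openstack", "CHANGE_ME"  # placeholders
if vhost not in rabbitmqctl("list_vhosts", "--quiet").split():
    rabbitmqctl("add_vhost", vhost)                    # "Ensure RabbitMQ vhosts exist"
if user not in rabbitmqctl("list_users", "--quiet").split():
    rabbitmqctl("add_user", user, password)            # "Ensure RabbitMQ users exist"
rabbitmqctl("set_permissions", "-p", vhost, user, ".*", ".*", ".*")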
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024221 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024225 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024229 | orchestrator | 2025-08-29 15:16:43.024233 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 15:16:43.024236 | orchestrator | Friday 29 August 2025 15:07:19 +0000 (0:00:00.492) 0:01:58.158 ********* 2025-08-29 15:16:43.024240 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:16:43.024244 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024248 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:16:43.024251 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024255 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:16:43.024259 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024263 | orchestrator | 2025-08-29 15:16:43.024266 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 15:16:43.024273 | orchestrator | Friday 29 August 2025 15:07:20 +0000 (0:00:01.018) 0:01:59.177 ********* 2025-08-29 15:16:43.024277 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024281 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.024285 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024289 | orchestrator | 2025-08-29 15:16:43.024305 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-08-29 15:16:43.024311 | orchestrator | Friday 29 August 2025 15:07:21 +0000 (0:00:00.922) 0:02:00.099 ********* 2025-08-29 15:16:43.024316 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024322 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024328 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.024334 | orchestrator | 2025-08-29 15:16:43.024339 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-08-29 15:16:43.024346 | orchestrator | Friday 29 August 2025 15:07:22 +0000 (0:00:01.366) 0:02:01.466 ********* 2025-08-29 15:16:43.024353 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024359 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024369 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.024374 | orchestrator | 2025-08-29 15:16:43.024431 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-08-29 15:16:43.024437 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:05.202) 0:02:06.668 ********* 2025-08-29 15:16:43.024443 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024467 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.024472 | orchestrator | 2025-08-29 15:16:43.024478 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:16:43.024491 | orchestrator | Friday 29 August 2025 15:07:51 +0000 (0:00:23.349) 0:02:30.018 ********* 2025-08-29 15:16:43.024502 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024507 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024513 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.024519 | orchestrator | 2025-08-29 15:16:43.024525 | orchestrator | TASK [nova-cell : Extract current cell 
settings from list] ********************* 2025-08-29 15:16:43.024530 | orchestrator | Friday 29 August 2025 15:08:06 +0000 (0:00:14.856) 0:02:44.874 ********* 2025-08-29 15:16:43.024536 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.024542 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024547 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024553 | orchestrator | 2025-08-29 15:16:43.024559 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 15:16:43.024564 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:04.372) 0:02:49.247 ********* 2025-08-29 15:16:43.024570 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024576 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024581 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.024587 | orchestrator | 2025-08-29 15:16:43.024593 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 15:16:43.024599 | orchestrator | Friday 29 August 2025 15:08:25 +0000 (0:00:14.438) 0:03:03.685 ********* 2025-08-29 15:16:43.024605 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024610 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024616 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024621 | orchestrator | 2025-08-29 15:16:43.024627 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 15:16:43.024633 | orchestrator | Friday 29 August 2025 15:08:26 +0000 (0:00:01.506) 0:03:05.192 ********* 2025-08-29 15:16:43.024639 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024645 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.024650 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.024656 | orchestrator | 2025-08-29 15:16:43.024661 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 15:16:43.024667 | orchestrator | 2025-08-29 15:16:43.024673 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:43.024679 | orchestrator | Friday 29 August 2025 15:08:26 +0000 (0:00:00.347) 0:03:05.540 ********* 2025-08-29 15:16:43.024684 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.024691 | orchestrator | 2025-08-29 15:16:43.024697 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 15:16:43.024703 | orchestrator | Friday 29 August 2025 15:08:27 +0000 (0:00:00.554) 0:03:06.095 ********* 2025-08-29 15:16:43.024709 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-08-29 15:16:43.024714 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 15:16:43.024720 | orchestrator | 2025-08-29 15:16:43.024725 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 15:16:43.024731 | orchestrator | Friday 29 August 2025 15:08:30 +0000 (0:00:03.511) 0:03:09.606 ********* 2025-08-29 15:16:43.024737 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 15:16:43.024744 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> 
public)  2025-08-29 15:16:43.024751 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 15:16:43.024757 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 15:16:43.024763 | orchestrator | 2025-08-29 15:16:43.024769 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 15:16:43.024775 | orchestrator | Friday 29 August 2025 15:08:37 +0000 (0:00:06.429) 0:03:16.035 ********* 2025-08-29 15:16:43.024785 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:16:43.024791 | orchestrator | 2025-08-29 15:16:43.024798 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 15:16:43.024804 | orchestrator | Friday 29 August 2025 15:08:40 +0000 (0:00:03.431) 0:03:19.466 ********* 2025-08-29 15:16:43.024815 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:16:43.024819 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 15:16:43.024823 | orchestrator | 2025-08-29 15:16:43.024826 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-08-29 15:16:43.024830 | orchestrator | Friday 29 August 2025 15:08:44 +0000 (0:00:03.854) 0:03:23.321 ********* 2025-08-29 15:16:43.024834 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:16:43.024838 | orchestrator | 2025-08-29 15:16:43.024841 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 15:16:43.024845 | orchestrator | Friday 29 August 2025 15:08:47 +0000 (0:00:03.270) 0:03:26.591 ********* 2025-08-29 15:16:43.024849 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 15:16:43.024852 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 15:16:43.024856 | orchestrator | 2025-08-29 15:16:43.024860 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 15:16:43.024867 | orchestrator | Friday 29 August 2025 15:08:55 +0000 (0:00:07.750) 0:03:34.342 ********* 2025-08-29 15:16:43.024876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.024883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.024895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.024904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.024910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.024914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.024918 | orchestrator | 2025-08-29 15:16:43.024922 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-08-29 15:16:43.024926 | orchestrator | Friday 29 August 2025 15:08:57 +0000 (0:00:01.585) 0:03:35.927 ********* 2025-08-29 15:16:43.024929 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024933 | orchestrator | 2025-08-29 15:16:43.024937 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 15:16:43.024941 | orchestrator | Friday 29 August 2025 15:08:57 +0000 (0:00:00.153) 0:03:36.081 ********* 2025-08-29 15:16:43.024944 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.024948 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.025000 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.025005 | orchestrator | 2025-08-29 15:16:43.025008 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 15:16:43.025015 | orchestrator | Friday 29 August 2025 15:08:58 +0000 (0:00:00.696) 0:03:36.778 ********* 2025-08-29 15:16:43.025019 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:16:43.025023 | orchestrator | 2025-08-29 15:16:43.025026 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 15:16:43.025030 | orchestrator | Friday 29 August 2025 15:08:58 +0000 (0:00:00.846) 0:03:37.624 ********* 2025-08-29 15:16:43.025034 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.025038 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.025041 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.025045 | orchestrator | 2025-08-29 15:16:43.025049 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:43.025053 | orchestrator | Friday 29 August 2025 15:08:59 +0000 (0:00:00.379) 0:03:38.003 ********* 2025-08-29 15:16:43.025056 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.025060 | orchestrator | 2025-08-29 15:16:43.025064 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:16:43.025085 | orchestrator | Friday 29 August 2025 15:08:59 +0000 (0:00:00.612) 0:03:38.615 ********* 2025-08-29 15:16:43.025096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025146 | orchestrator | 2025-08-29 15:16:43.025150 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 15:16:43.025153 | orchestrator | Friday 29 August 2025 15:09:02 +0000 (0:00:03.023) 0:03:41.639 ********* 2025-08-29 15:16:43.025158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025183 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.025190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025210 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.025214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025226 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.025235 | orchestrator | 2025-08-29 15:16:43.025239 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:16:43.025243 | orchestrator | Friday 29 August 2025 15:09:03 +0000 (0:00:00.710) 0:03:42.350 ********* 2025-08-29 15:16:43.025247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025258 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.025604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025628 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.025632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025646 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.025650 | orchestrator | 2025-08-29 15:16:43.025654 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-08-29 15:16:43.025658 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:00.990) 0:03:43.340 ********* 2025-08-29 15:16:43.025666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025726 | orchestrator | 2025-08-29 15:16:43.025730 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 15:16:43.025733 | orchestrator | Friday 29 August 2025 15:09:08 +0000 (0:00:03.881) 0:03:47.222 ********* 2025-08-29 15:16:43.025737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.025802 | orchestrator | 2025-08-29 15:16:43.025806 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 15:16:43.025810 | orchestrator | Friday 29 August 2025 15:09:19 +0000 (0:00:10.627) 0:03:57.850 ********* 2025-08-29 15:16:43.025819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025831 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.025835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025843 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.025848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:43.025857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.025863 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.025867 | orchestrator | 2025-08-29 15:16:43.025871 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-08-29 15:16:43.025875 | orchestrator | Friday 29 August 2025 15:09:21 +0000 (0:00:01.894) 0:03:59.745 ********* 2025-08-29 15:16:43.025879 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.025882 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:43.025886 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.025890 | orchestrator | 2025-08-29 15:16:43.025894 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-08-29 15:16:43.025898 | orchestrator | Friday 29 August 2025 15:09:24 +0000 (0:00:03.099) 0:04:02.845 ********* 2025-08-29 15:16:43.025901 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.025905 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.025909 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.025913 | orchestrator | 2025-08-29 15:16:43.025916 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-08-29 15:16:43.025920 | orchestrator | Friday 29 August 2025 15:09:24 +0000 (0:00:00.674) 0:04:03.519 ********* 2025-08-29 15:16:43.025970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.025983 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.026125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:43.026135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.026140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:16:43.026144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:16:43.026148 | orchestrator |
2025-08-29 15:16:43.026153 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-08-29 15:16:43.026166 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:02.268) 0:04:05.787 *********
2025-08-29 15:16:43.026170 | orchestrator |
2025-08-29 15:16:43.026173 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-08-29 15:16:43.026178 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:00.154) 0:04:05.942 *********
2025-08-29 15:16:43.026181 | orchestrator |
2025-08-29 15:16:43.026185 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-08-29 15:16:43.026189 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:00.362) 0:04:06.304 *********
2025-08-29 15:16:43.026193 | orchestrator |
2025-08-29 15:16:43.026196 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-08-29 15:16:43.026364 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:00.329) 0:04:06.634 *********
2025-08-29 15:16:43.026372 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:43.026379 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:43.026385 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:43.026393 | orchestrator |
2025-08-29 15:16:43.026407 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-08-29 15:16:43.026413 | orchestrator | Friday 29 August 2025 15:09:53 +0000 (0:00:25.679) 0:04:32.314 *********
2025-08-29 15:16:43.026419 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:43.026425 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:43.026431 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:43.026437 | orchestrator |
2025-08-29 15:16:43.026443 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-08-29 15:16:43.026449 | orchestrator |
2025-08-29 15:16:43.026454 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:16:43.026460 | orchestrator | Friday 29 August 2025 15:10:09 +0000 (0:00:16.343) 0:04:48.658 *********
2025-08-29 15:16:43.026467 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:16:43.026474 | orchestrator |
2025-08-29 15:16:43.026887 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:16:43.026923 | orchestrator | Friday 29 August 2025 15:10:12 +0000 (0:00:02.555) 0:04:51.214 *********
2025-08-29 15:16:43.026930 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.026936 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:43.026942 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:43.026947 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:43.026968 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:43.026975 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:43.026981 | orchestrator |
2025-08-29 15:16:43.026987 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-08-29 15:16:43.026994 | orchestrator | Friday 29 August 2025 15:10:15 +0000 (0:00:03.346) 0:04:54.560 *********
2025-08-29 15:16:43.026999 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:43.027003 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:43.027007 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:43.027011 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:16:43.027016 | orchestrator |
2025-08-29 15:16:43.027020 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 15:16:43.027024 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:01.425) 0:04:55.986 *********
2025-08-29 15:16:43.027028 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:16:43.027032 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:16:43.027036 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:16:43.027040 | orchestrator |
2025-08-29 15:16:43.027044 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 15:16:43.027056 | orchestrator | Friday 29 August 2025 15:10:19 +0000 (0:00:02.471) 0:04:58.457 *********
2025-08-29 15:16:43.027060 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:16:43.027064 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:16:43.027068 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:16:43.027072 | orchestrator |
2025-08-29 15:16:43.027076 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 15:16:43.027080 | orchestrator | Friday 29 August 2025 15:10:22 +0000 (0:00:02.052) 0:05:01.305 *********
2025-08-29 15:16:43.027084 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:16:43.027088 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.027098 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:16:43.027110 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:43.027114 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:16:43.027118 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:43.027122 | orchestrator |
2025-08-29 15:16:43.027126 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-08-29 15:16:43.027129 | orchestrator | Friday 29 August 2025 15:10:24 +0000 (0:00:02.287) 0:05:03.357 *********
2025-08-29 15:16:43.027133 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:43.027137 | orchestrator | changed: [testbed-node-4] =>
(item=net.bridge.bridge-nf-call-iptables) 2025-08-29 15:16:43.027141 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 15:16:43.027145 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 15:16:43.027149 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 15:16:43.027153 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 15:16:43.027156 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 15:16:43.027160 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 15:16:43.027164 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 15:16:43.027167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.027171 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.027175 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 15:16:43.027179 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 15:16:43.027182 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 15:16:43.027186 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.027190 | orchestrator | 2025-08-29 15:16:43.027194 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-08-29 15:16:43.027215 | orchestrator | Friday 29 August 2025 15:10:26 +0000 (0:00:02.287) 0:05:05.644 ********* 2025-08-29 15:16:43.027219 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.027223 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.027227 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.027236 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.027240 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.027243 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.027247 | orchestrator | 2025-08-29 15:16:43.027251 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 15:16:43.027255 | orchestrator | Friday 29 August 2025 15:10:28 +0000 (0:00:01.939) 0:05:07.584 ********* 2025-08-29 15:16:43.027258 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.027262 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.027266 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.027269 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.027273 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.027277 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.027281 | orchestrator | 2025-08-29 15:16:43.027284 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 15:16:43.027288 | orchestrator | Friday 29 August 2025 15:10:32 +0000 (0:00:03.971) 0:05:11.555 ********* 2025-08-29 15:16:43.027356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027430 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
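Each item in the loop above is one kolla-ansible service definition, dumped by the callback as a Python dict. Rendered as YAML purely for readability (values are copied from the testbed-node-0 nova-novncproxy item logged above; the name of the variable that holds this map is not visible in this log, so it is omitted), a single entry has roughly this shape:

    # YAML re-rendering of the logged nova-novncproxy item (testbed-node-0);
    # the two empty-string volume entries from the dump are optional mounts
    # that resolved to nothing for this configuration and are omitted here.
    nova-novncproxy:
      container_name: nova_novncproxy
      group: nova-novncproxy
      image: registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711
      enabled: true
      volumes:
        - "/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "kolla_logs:/var/log/kolla/"
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:6080/vnc_lite.html"]
        timeout: "30"

The healthcheck block is what the role passes on as the container's health check, and the same structure recurs for every nova service listed in this task's output.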
2025-08-29 15:16:43.027482 | orchestrator | 2025-08-29 15:16:43.027486 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:16:43.027490 | orchestrator | Friday 29 August 2025 15:10:38 +0000 (0:00:05.987) 0:05:17.543 ********* 2025-08-29 15:16:43.027495 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.027499 | orchestrator | 2025-08-29 15:16:43.027503 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:16:43.027507 | orchestrator | Friday 29 August 2025 15:10:41 +0000 (0:00:02.551) 0:05:20.094 ********* 2025-08-29 15:16:43.027513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 
15:16:43.027544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.027669 | orchestrator | 2025-08-29 15:16:43.027675 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 15:16:43.027683 | orchestrator | Friday 29 August 2025 15:10:48 +0000 (0:00:06.950) 0:05:27.045 ********* 2025-08-29 15:16:43.027689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.027696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.027705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027714 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.027740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.027747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.027753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027759 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.027766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.027773 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027779 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.027790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.027816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.027827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027835 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.027841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.027846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027852 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.027857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.027873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027879 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.027885 | orchestrator | 2025-08-29 15:16:43.027890 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:16:43.027896 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:02.490) 0:05:29.535 ********* 2025-08-29 15:16:43.027917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.027924 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.027931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.027937 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.027943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.027979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.028005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.028012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.028018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.028025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.028032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.028042 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.028049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.028060 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.028067 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.028147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.028157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.028164 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.028183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.028190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.028203 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.028210 | orchestrator | 2025-08-29 15:16:43.028216 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2025-08-29 15:16:43.028222 | orchestrator | Friday 29 August 2025 15:10:57 +0000 (0:00:06.550) 0:05:36.085 *********
2025-08-29 15:16:43.028227 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:43.028233 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:43.028238 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:43.028244 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:16:43.028251 | orchestrator |
2025-08-29 15:16:43.028257 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-08-29 15:16:43.028262 | orchestrator | Friday 29 August 2025 15:11:00 +0000 (0:00:03.093) 0:05:39.179 *********
2025-08-29 15:16:43.028268 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:16:43.028273 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:16:43.028281 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:16:43.028287 | orchestrator |
2025-08-29 15:16:43.028293 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-08-29 15:16:43.028300 | orchestrator | Friday 29 August 2025 15:11:03 +0000 (0:00:02.743) 0:05:41.923 *********
2025-08-29 15:16:43.028306 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:16:43.028311 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:16:43.028317 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:16:43.028323 | orchestrator |
2025-08-29 15:16:43.028332 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-08-29 15:16:43.028338 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:05.103) 0:05:47.026 *********
2025-08-29 15:16:43.028343 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:16:43.028350 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:16:43.028356 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:16:43.028362 | orchestrator |
2025-08-29 15:16:43.028367 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-08-29 15:16:43.028373 | orchestrator | Friday 29 August 2025 15:11:09 +0000 (0:00:01.101) 0:05:48.128 *********
2025-08-29 15:16:43.028378 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:16:43.028384 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:16:43.028390 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:16:43.028396 | orchestrator |
2025-08-29 15:16:43.028402 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-08-29 15:16:43.028408 | orchestrator | Friday 29 August 2025 15:11:11 +0000 (0:00:01.698) 0:05:49.827 *********
2025-08-29 15:16:43.028415 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:16:43.028445 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:16:43.028451 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:16:43.028456 | orchestrator |
2025-08-29 15:16:43.028462 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-08-29 15:16:43.028468 | orchestrator | Friday 29 August 2025 15:11:13 +0000 (0:00:02.076) 0:05:51.903 *********
2025-08-29 15:16:43.028474 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:16:43.028479 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:16:43.028485 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:16:43.028491 | orchestrator |
2025-08-29 15:16:43.028497 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-08-29 15:16:43.028502 | orchestrator | Friday 29 August 2025 15:11:15 +0000 (0:00:02.420) 0:05:54.324 *********
2025-08-29 15:16:43.028508 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:16:43.028520 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:16:43.028526 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:16:43.028531 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-08-29 15:16:43.028537 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-08-29 15:16:43.028543 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-08-29 15:16:43.028549 | orchestrator |
2025-08-29 15:16:43.028555 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-08-29 15:16:43.028561 | orchestrator | Friday 29 August 2025 15:11:25 +0000 (0:00:09.999) 0:06:04.324 *********
2025-08-29 15:16:43.028566 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.028571 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:43.028577 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:43.028583 | orchestrator |
2025-08-29 15:16:43.028589 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-08-29 15:16:43.028596 | orchestrator | Friday 29 August 2025 15:11:26 +0000 (0:00:00.685) 0:06:05.009 *********
2025-08-29 15:16:43.028601 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.028607 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:43.028612 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:43.028617 | orchestrator |
2025-08-29 15:16:43.028623 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-08-29 15:16:43.028628 | orchestrator | Friday 29 August 2025 15:11:27 +0000 (0:00:00.841) 0:06:05.850 *********
2025-08-29 15:16:43.028633 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:43.028638 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:43.028644 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:43.028650 | orchestrator |
2025-08-29 15:16:43.028655 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-08-29 15:16:43.028661 | orchestrator | Friday 29 August 2025 15:11:32 +0000 (0:00:05.152) 0:06:11.003 *********
2025-08-29 15:16:43.028668 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 15:16:43.028675 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 15:16:43.028681 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 15:16:43.028687 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 15:16:43.028693 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 15:16:43.028701 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 15:16:43.028706 | orchestrator |
2025-08-29 15:16:43.028712 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-08-29 15:16:43.028717 | orchestrator | Friday 29 August 2025 15:11:38 +0000 (0:00:06.062) 0:06:17.066 *********
2025-08-29 15:16:43.028723 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 15:16:43.028729 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 15:16:43.028735 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 15:16:43.028741 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 15:16:43.028747 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:43.028753 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 15:16:43.028758 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:43.028765 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 15:16:43.028845 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:43.028858 | orchestrator |
2025-08-29 15:16:43.028879 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-08-29 15:16:43.028894 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:08.685) 0:06:25.752 *********
2025-08-29 15:16:43.028902 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.028908 | orchestrator |
2025-08-29 15:16:43.028913 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-08-29 15:16:43.028920 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:00.416) 0:06:26.169 *********
2025-08-29 15:16:43.028925 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.028931 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:43.028938 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:43.028943 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:43.028965 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:43.028972 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:43.028978 | orchestrator |
2025-08-29 15:16:43.028984 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-08-29 15:16:43.029019 | orchestrator | Friday 29 August 2025 15:11:50 +0000 (0:00:02.518) 0:06:28.687 *********
2025-08-29 15:16:43.029026 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:16:43.029032 | orchestrator |
2025-08-29 15:16:43.029038 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-08-29 15:16:43.029043 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:01.434) 0:06:30.122 *********
2025-08-29 15:16:43.029049 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:43.029055 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:43.029061 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:43.029067 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:43.029072 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:43.029078 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:43.029085 | orchestrator |
2025-08-29 15:16:43.029091 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-08-29 15:16:43.029096 |
orchestrator | Friday 29 August 2025 15:11:52 +0000 (0:00:00.901) 0:06:31.024 ********* 2025-08-29 15:16:43.029104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029272 | orchestrator | 2025-08-29 15:16:43.029280 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 15:16:43.029286 | orchestrator | Friday 29 August 2025 15:12:01 +0000 (0:00:09.362) 0:06:40.386 ********* 2025-08-29 15:16:43.029363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.029410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.029420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.029426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.029439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.029451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.029507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029594 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.029606 | orchestrator | 2025-08-29 15:16:43.029612 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 15:16:43.029619 | orchestrator | Friday 29 August 2025 15:12:19 +0000 (0:00:17.456) 0:06:57.843 ********* 2025-08-29 15:16:43.029625 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.029636 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.029642 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.029647 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.029652 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.029658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.029665 | orchestrator | 2025-08-29 15:16:43.029671 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 15:16:43.029678 | orchestrator | Friday 29 August 2025 15:12:25 +0000 (0:00:05.969) 0:07:03.812 ********* 2025-08-29 15:16:43.029685 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:16:43.029690 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:16:43.029696 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:16:43.029703 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:16:43.029708 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:16:43.029714 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:16:43.029719 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:16:43.029726 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.029732 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:16:43.029737 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:16:43.029743 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.029749 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:16:43.029755 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.029761 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:16:43.029766 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:16:43.029772 | orchestrator | 2025-08-29 15:16:43.029778 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 15:16:43.029784 | orchestrator | Friday 29 August 2025 15:12:33 +0000 (0:00:08.244) 0:07:12.057 ********* 2025-08-29 15:16:43.029794 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.029801 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.029806 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.029811 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.029816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.029822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.029828 | orchestrator | 2025-08-29 15:16:43.029833 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 15:16:43.029838 | orchestrator | Friday 29 August 2025 15:12:35 +0000 (0:00:01.691) 0:07:13.749 ********* 2025-08-29 15:16:43.029844 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:16:43.029923 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:16:43.029991 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:16:43.029999 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:16:43.030005 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:16:43.030011 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:16:43.030063 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:43.030070 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:43.030077 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:43.030083 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:43.030089 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:43.030095 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:43.030101 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030107 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:43.030113 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:43.030118 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030125 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:43.030131 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030137 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:43.030143 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:43.030148 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:43.030154 | orchestrator | 2025-08-29 15:16:43.030160 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 15:16:43.030166 | orchestrator | Friday 29 August 2025 15:12:48 +0000 (0:00:13.189) 0:07:26.939 ********* 2025-08-29 15:16:43.030173 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:16:43.030179 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:16:43.030185 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:16:43.030191 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:16:43.030197 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:16:43.030202 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:16:43.030209 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:16:43.030215 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:16:43.030221 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:16:43.030227 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:16:43.030234 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:16:43.030240 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:16:43.030247 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030252 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:16:43.030264 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:16:43.030268 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:16:43.030276 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030280 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:16:43.030284 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:16:43.030288 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030292 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:16:43.030295 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:16:43.030299 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:16:43.030326 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:16:43.030331 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:16:43.030335 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:16:43.030338 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:16:43.030342 | orchestrator | 2025-08-29 15:16:43.030346 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 15:16:43.030350 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:13.059) 0:07:39.998 ********* 2025-08-29 15:16:43.030354 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.030358 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.030362 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.030365 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030369 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030373 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030377 | orchestrator | 2025-08-29 15:16:43.030380 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 15:16:43.030384 | orchestrator | Friday 29 August 2025 15:13:02 +0000 (0:00:00.907) 0:07:40.906 ********* 2025-08-29 15:16:43.030420 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.030426 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.030430 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.030434 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030438 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030442 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030446 | orchestrator | 2025-08-29 15:16:43.030450 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 15:16:43.030453 | orchestrator | Friday 29 August 2025 15:13:03 +0000 (0:00:01.144) 0:07:42.050 ********* 2025-08-29 15:16:43.030457 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030471 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030477 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030483 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.030537 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.030553 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.030557 | orchestrator | 2025-08-29 15:16:43.030561 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 15:16:43.030565 | orchestrator | Friday 29 August 2025 15:13:05 +0000 (0:00:02.526) 0:07:44.577 ********* 2025-08-29 15:16:43.030570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.030589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.030599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.030603 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.030624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.030629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.030633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.030644 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.030649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:43.030655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:43.030673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.030678 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.030682 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.030686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.030690 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.030707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.030713 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:43.030736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:43.030741 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030745 | orchestrator | 2025-08-29 15:16:43.030749 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-08-29 15:16:43.030753 | orchestrator | Friday 29 August 2025 15:13:09 +0000 (0:00:04.065) 0:07:48.642 ********* 2025-08-29 15:16:43.030756 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 15:16:43.030760 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 15:16:43.030764 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.030768 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 15:16:43.030772 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 15:16:43.030776 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.030779 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 15:16:43.030783 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 15:16:43.030787 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.030790 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 15:16:43.030795 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 15:16:43.030798 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.030802 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 15:16:43.030806 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 15:16:43.030810 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.030817 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 15:16:43.030822 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 15:16:43.030825 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.030829 | orchestrator | 2025-08-29 15:16:43.030833 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-08-29 15:16:43.030837 | orchestrator | Friday 29 August 2025 15:13:10 +0000 (0:00:00.914) 0:07:49.557 ********* 2025-08-29 15:16:43.030841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.030948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.031252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:43.031264 | orchestrator | 2025-08-29 15:16:43.031268 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:16:43.031272 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:03.257) 0:07:52.814 ********* 2025-08-29 15:16:43.031276 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.031280 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.031284 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.031350 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.031356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.031360 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.031364 | orchestrator | 2025-08-29 15:16:43.031368 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 15:16:43.031372 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:00.681) 0:07:53.496 ********* 2025-08-29 15:16:43.031384 | orchestrator | 2025-08-29 15:16:43.031389 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 15:16:43.031393 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:00.161) 0:07:53.657 ********* 2025-08-29 15:16:43.031397 | orchestrator | 2025-08-29 15:16:43.031401 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 15:16:43.031404 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:00.165) 0:07:53.823 ********* 2025-08-29 15:16:43.031408 | orchestrator | 2025-08-29 15:16:43.031412 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 15:16:43.031416 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:00.366) 0:07:54.190 ********* 2025-08-29 15:16:43.031420 | orchestrator | 2025-08-29 15:16:43.031424 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 15:16:43.031427 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:00.144) 0:07:54.334 ********* 2025-08-29 15:16:43.031431 | orchestrator | 2025-08-29 15:16:43.031435 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 15:16:43.031439 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:00.154) 0:07:54.488 ********* 2025-08-29 15:16:43.031443 | orchestrator | 2025-08-29 15:16:43.031447 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-08-29 15:16:43.031451 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:00.168) 0:07:54.657 ********* 2025-08-29 15:16:43.031454 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.031458 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:43.031462 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.031466 | orchestrator | 2025-08-29 15:16:43.031469 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-08-29 15:16:43.031473 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:10.570) 0:08:05.228 ********* 2025-08-29 15:16:43.031477 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.031481 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.031485 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:43.031488 | orchestrator | 2025-08-29 15:16:43.031492 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-08-29 15:16:43.031496 | orchestrator | Friday 29 August 2025 15:13:39 +0000 (0:00:13.108) 0:08:18.336 ********* 2025-08-29 15:16:43.031500 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.031504 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.031507 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.031511 | orchestrator | 2025-08-29 15:16:43.031515 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] 
******************* 2025-08-29 15:16:43.031519 | orchestrator | Friday 29 August 2025 15:14:04 +0000 (0:00:24.841) 0:08:43.177 ********* 2025-08-29 15:16:43.031572 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.031578 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.031582 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.031586 | orchestrator | 2025-08-29 15:16:43.031590 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-08-29 15:16:43.031593 | orchestrator | Friday 29 August 2025 15:14:46 +0000 (0:00:41.686) 0:09:24.863 ********* 2025-08-29 15:16:43.031597 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.031601 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.031605 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.031704 | orchestrator | 2025-08-29 15:16:43.031712 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-08-29 15:16:43.031716 | orchestrator | Friday 29 August 2025 15:14:47 +0000 (0:00:01.016) 0:09:25.880 ********* 2025-08-29 15:16:43.031720 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.031724 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.031728 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.031732 | orchestrator | 2025-08-29 15:16:43.031735 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-08-29 15:16:43.031747 | orchestrator | Friday 29 August 2025 15:14:48 +0000 (0:00:01.585) 0:09:27.465 ********* 2025-08-29 15:16:43.031751 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:43.031755 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:43.031759 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:43.031763 | orchestrator | 2025-08-29 15:16:43.031767 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-08-29 15:16:43.031771 | orchestrator | Friday 29 August 2025 15:15:22 +0000 (0:00:33.475) 0:10:00.941 ********* 2025-08-29 15:16:43.031775 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.031779 | orchestrator | 2025-08-29 15:16:43.031783 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-08-29 15:16:43.031786 | orchestrator | Friday 29 August 2025 15:15:22 +0000 (0:00:00.147) 0:10:01.088 ********* 2025-08-29 15:16:43.031794 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.031798 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.031801 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.031805 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.031809 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.031813 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
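
The "FAILED - RETRYING ... (20 retries left)" record above comes from Ansible's retry/until loop: the task is re-run (here up to 20 times) until its result satisfies the until condition, and only then does the play move on. Below is a minimal, self-contained sketch of that pattern; the openstack CLI call, the delay, and the expected_compute_hosts variable are illustrative assumptions, not the actual task from the nova-cell role.

    # Illustrative sketch of Ansible's retry/until pattern, similar in shape to the
    # "Waiting for nova-compute services to register themselves" task above.
    # The command, delay, and expected_compute_hosts value are assumptions.
    - hosts: localhost
      gather_facts: false
      vars:
        expected_compute_hosts: 3   # assumed number of hypervisors to wait for
      tasks:
        - name: Wait for nova-compute services to register themselves
          ansible.builtin.command: >
            openstack compute service list --service nova-compute -f value -c Host
          register: compute_services
          changed_when: false
          retries: 20   # matches the "20 retries left" counter in the log
          delay: 10     # seconds between attempts (assumed)
          until: compute_services.stdout_lines | length >= expected_compute_hosts
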
2025-08-29 15:16:43.031817 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:16:43.031821 | orchestrator | 2025-08-29 15:16:43.031825 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-08-29 15:16:43.031829 | orchestrator | Friday 29 August 2025 15:15:47 +0000 (0:00:24.789) 0:10:25.878 ********* 2025-08-29 15:16:43.031833 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.031837 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.031840 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.031844 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.031864 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.031869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.031872 | orchestrator | 2025-08-29 15:16:43.031876 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-08-29 15:16:43.031880 | orchestrator | Friday 29 August 2025 15:15:59 +0000 (0:00:12.336) 0:10:38.214 ********* 2025-08-29 15:16:43.031883 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.031887 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.031891 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.031895 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.031899 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.031902 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-08-29 15:16:43.031906 | orchestrator | 2025-08-29 15:16:43.031910 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:16:43.031914 | orchestrator | Friday 29 August 2025 15:16:06 +0000 (0:00:06.475) 0:10:44.690 ********* 2025-08-29 15:16:43.031918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:16:43.031922 | orchestrator | 2025-08-29 15:16:43.031925 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:16:43.031929 | orchestrator | Friday 29 August 2025 15:16:18 +0000 (0:00:12.365) 0:10:57.056 ********* 2025-08-29 15:16:43.031933 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:16:43.031937 | orchestrator | 2025-08-29 15:16:43.031940 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-08-29 15:16:43.031944 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:01.667) 0:10:58.723 ********* 2025-08-29 15:16:43.031948 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.031968 | orchestrator | 2025-08-29 15:16:43.031973 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-08-29 15:16:43.031979 | orchestrator | Friday 29 August 2025 15:16:21 +0000 (0:00:01.551) 0:11:00.275 ********* 2025-08-29 15:16:43.031991 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:16:43.031995 | orchestrator | 2025-08-29 15:16:43.031999 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-08-29 15:16:43.032003 | orchestrator | Friday 29 August 2025 15:16:32 +0000 (0:00:10.855) 0:11:11.131 ********* 2025-08-29 15:16:43.032007 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:16:43.032011 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:16:43.032015 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 15:16:43.032019 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.032022 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:16:43.032026 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:16:43.032030 | orchestrator | 2025-08-29 15:16:43.032034 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-08-29 15:16:43.032037 | orchestrator | 2025-08-29 15:16:43.032041 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-08-29 15:16:43.032045 | orchestrator | Friday 29 August 2025 15:16:34 +0000 (0:00:02.106) 0:11:13.237 ********* 2025-08-29 15:16:43.032049 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.032053 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:43.032056 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.032060 | orchestrator | 2025-08-29 15:16:43.032064 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-08-29 15:16:43.032067 | orchestrator | 2025-08-29 15:16:43.032071 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-08-29 15:16:43.032075 | orchestrator | Friday 29 August 2025 15:16:35 +0000 (0:00:01.061) 0:11:14.299 ********* 2025-08-29 15:16:43.032079 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032082 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032086 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032090 | orchestrator | 2025-08-29 15:16:43.032093 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-08-29 15:16:43.032097 | orchestrator | 2025-08-29 15:16:43.032101 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-08-29 15:16:43.032105 | orchestrator | Friday 29 August 2025 15:16:36 +0000 (0:00:00.895) 0:11:15.195 ********* 2025-08-29 15:16:43.032109 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-08-29 15:16:43.032113 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 15:16:43.032116 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 15:16:43.032121 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-08-29 15:16:43.032125 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-08-29 15:16:43.032129 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-08-29 15:16:43.032133 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:43.032137 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-08-29 15:16:43.032141 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 15:16:43.032144 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 15:16:43.032151 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-08-29 15:16:43.032155 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-08-29 15:16:43.032159 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-08-29 15:16:43.032162 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:43.032166 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-08-29 15:16:43.032170 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 15:16:43.032174 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 15:16:43.032178 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-08-29 15:16:43.032181 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-08-29 15:16:43.032185 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-08-29 15:16:43.032193 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:43.032197 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-08-29 15:16:43.032217 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 15:16:43.032221 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 15:16:43.032225 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-08-29 15:16:43.032229 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-08-29 15:16:43.032232 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-08-29 15:16:43.032236 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032240 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-08-29 15:16:43.032244 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 15:16:43.032247 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 15:16:43.032251 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-08-29 15:16:43.032255 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-08-29 15:16:43.032259 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-08-29 15:16:43.032263 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032266 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-08-29 15:16:43.032270 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 15:16:43.032274 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 15:16:43.032278 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-08-29 15:16:43.032281 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-08-29 15:16:43.032285 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-08-29 15:16:43.032289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032292 | orchestrator | 2025-08-29 15:16:43.032296 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-08-29 15:16:43.032300 | orchestrator | 2025-08-29 15:16:43.032303 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-08-29 15:16:43.032307 | orchestrator | Friday 29 August 2025 15:16:38 +0000 (0:00:01.662) 0:11:16.857 ********* 2025-08-29 15:16:43.032311 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-08-29 15:16:43.032315 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-08-29 15:16:43.032319 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032323 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-08-29 15:16:43.032326 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-08-29 15:16:43.032330 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032334 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-08-29 15:16:43.032337 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-08-29 15:16:43.032341 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032345 | orchestrator | 2025-08-29 15:16:43.032349 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-08-29 15:16:43.032353 | orchestrator | 2025-08-29 15:16:43.032357 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-08-29 15:16:43.032362 | orchestrator | Friday 29 August 2025 15:16:38 +0000 (0:00:00.682) 0:11:17.540 ********* 2025-08-29 15:16:43.032366 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032370 | orchestrator | 2025-08-29 15:16:43.032375 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-08-29 15:16:43.032379 | orchestrator | 2025-08-29 15:16:43.032384 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-08-29 15:16:43.032388 | orchestrator | Friday 29 August 2025 15:16:39 +0000 (0:00:01.080) 0:11:18.620 ********* 2025-08-29 15:16:43.032392 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032406 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032411 | orchestrator | 2025-08-29 15:16:43.032415 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:16:43.032420 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:16:43.032426 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-08-29 15:16:43.032431 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 15:16:43.032435 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 15:16:43.032442 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 15:16:43.032446 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-08-29 15:16:43.032451 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-08-29 15:16:43.032455 | orchestrator | 2025-08-29 15:16:43.032460 | orchestrator | 2025-08-29 15:16:43.032464 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:16:43.032468 | orchestrator | Friday 29 August 2025 15:16:40 +0000 (0:00:00.599) 0:11:19.220 ********* 2025-08-29 15:16:43.032473 | orchestrator | =============================================================================== 2025-08-29 15:16:43.032490 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.69s 2025-08-29 15:16:43.032495 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 33.48s 2025-08-29 15:16:43.032499 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.11s 2025-08-29 15:16:43.032503 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.68s 2025-08-29 15:16:43.032507 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.84s 2025-08-29 15:16:43.032512 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 24.79s 2025-08-29 15:16:43.032516 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.35s 2025-08-29 15:16:43.032521 | orchestrator | nova-cell : Copying over nova.conf ------------------------------------- 17.46s 2025-08-29 15:16:43.032525 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.21s 2025-08-29 15:16:43.032530 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.72s 2025-08-29 15:16:43.032534 | orchestrator | nova : Restart nova-api container -------------------------------------- 16.34s 2025-08-29 15:16:43.032538 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.86s 2025-08-29 15:16:43.032543 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.44s 2025-08-29 15:16:43.032547 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.15s 2025-08-29 15:16:43.032551 | orchestrator | nova-cell : Copying over libvirt SASL configuration -------------------- 13.19s 2025-08-29 15:16:43.032555 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.11s 2025-08-29 15:16:43.032559 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 13.06s 2025-08-29 15:16:43.032562 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.37s 2025-08-29 15:16:43.032566 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.34s 2025-08-29 15:16:43.032570 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.86s 2025-08-29 15:16:43.032577 | orchestrator | 2025-08-29 15:16:43.032581 | orchestrator | 2025-08-29 15:16:43.032584 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:16:43.032588 | orchestrator | 2025-08-29 15:16:43.032592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:16:43.032595 | orchestrator | Friday 29 August 2025 15:14:13 +0000 (0:00:01.061) 0:00:01.061 ********* 2025-08-29 15:16:43.032599 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.032603 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:16:43.032607 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:16:43.032610 | orchestrator | 2025-08-29 15:16:43.032614 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:16:43.032618 | orchestrator | Friday 29 August 2025 15:14:14 +0000 (0:00:00.929) 0:00:01.991 ********* 2025-08-29 15:16:43.032622 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-08-29 15:16:43.032626 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-08-29 15:16:43.032630 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-08-29 15:16:43.032634 | orchestrator | 2025-08-29 15:16:43.032638 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-08-29 15:16:43.032642 | orchestrator | 2025-08-29 15:16:43.032646 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 15:16:43.032650 | orchestrator | Friday 29 August 2025 15:14:15 +0000 (0:00:01.235) 0:00:03.226 ********* 2025-08-29 15:16:43.032653 | 
orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:43.032657 | orchestrator | 2025-08-29 15:16:43.032661 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-08-29 15:16:43.032665 | orchestrator | Friday 29 August 2025 15:14:16 +0000 (0:00:00.898) 0:00:04.124 ********* 2025-08-29 15:16:43.032672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032704 | orchestrator | 2025-08-29 15:16:43.032708 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-08-29 15:16:43.032711 | orchestrator | Friday 29 August 2025 15:14:17 +0000 (0:00:01.369) 0:00:05.494 ********* 2025-08-29 15:16:43.032715 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-08-29 15:16:43.032719 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-08-29 15:16:43.032723 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:16:43.032727 | orchestrator | 2025-08-29 15:16:43.032731 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 15:16:43.032734 | orchestrator | Friday 29 August 2025 15:14:18 +0000 (0:00:01.038) 0:00:06.532 ********* 2025-08-29 15:16:43.032738 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-08-29 15:16:43.032742 | orchestrator | 2025-08-29 15:16:43.032746 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-08-29 15:16:43.032749 | orchestrator | Friday 29 August 2025 15:14:20 +0000 (0:00:01.192) 0:00:07.725 ********* 2025-08-29 15:16:43.032753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032768 | orchestrator | 2025-08-29 15:16:43.032771 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-08-29 15:16:43.032775 | orchestrator | Friday 29 August 2025 15:14:22 +0000 (0:00:02.014) 0:00:09.739 ********* 2025-08-29 15:16:43.032793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:16:43.032801 | orchestrator | skipping: [testbed-node-0] 
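
The grafana item printed repeatedly in the loop results above is the service definition kolla-ansible iterates over for this container; rendered as YAML it is easier to read. All values below are copied from the logged dict; only the surrounding variable name grafana_services and the YAML layout are added here for illustration.

    # The logged 'grafana' item, re-rendered as YAML for readability.
    # Values are taken verbatim from the log; the top-level variable name is assumed.
    grafana_services:
      grafana:
        container_name: grafana
        group: grafana
        enabled: true
        image: registry.osism.tech/kolla/release/grafana:12.0.2.20250711
        volumes:
          - /etc/kolla/grafana/:/var/lib/kolla/config_files/:ro
          - /etc/localtime:/etc/localtime:ro
          - /etc/timezone:/etc/timezone:ro
          - kolla_logs:/var/log/kolla/
        dimensions: {}
        haproxy:
          grafana_server:
            enabled: "yes"
            mode: http
            external: false
            port: "3000"
            listen_port: "3000"
          grafana_server_external:
            enabled: true
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: "3000"
            listen_port: "3000"
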
2025-08-29 15:16:43.032805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:16:43.032809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:16:43.032817 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032821 | orchestrator | 2025-08-29 15:16:43.032825 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-08-29 15:16:43.032828 | orchestrator | Friday 29 August 2025 15:14:22 +0000 (0:00:00.493) 0:00:10.233 ********* 2025-08-29 15:16:43.032832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:16:43.032836 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:16:43.032847 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:16:43.032869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032873 | orchestrator | 2025-08-29 15:16:43.032876 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-08-29 15:16:43.032880 | orchestrator | Friday 29 August 2025 15:14:23 +0000 (0:00:01.149) 0:00:11.383 ********* 2025-08-29 15:16:43.032884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032896 | orchestrator | 2025-08-29 15:16:43.032900 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-08-29 15:16:43.032904 | orchestrator | Friday 29 August 2025 15:14:25 +0000 (0:00:01.555) 0:00:12.938 ********* 2025-08-29 15:16:43.032908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.032942 | orchestrator | 2025-08-29 15:16:43.032945 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-08-29 15:16:43.032949 | orchestrator | Friday 29 August 2025 15:14:27 +0000 (0:00:01.762) 0:00:14.700 ********* 2025-08-29 15:16:43.032969 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.032974 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.032977 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.032981 | orchestrator | 2025-08-29 15:16:43.032985 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-08-29 15:16:43.032989 | orchestrator | Friday 29 August 2025 15:14:27 +0000 (0:00:00.690) 0:00:15.391 ********* 2025-08-29 15:16:43.032993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-08-29 15:16:43.032997 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-08-29 15:16:43.033000 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-08-29 15:16:43.033004 | orchestrator | 2025-08-29 15:16:43.033008 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-08-29 15:16:43.033012 | orchestrator | Friday 29 August 2025 15:14:29 +0000 (0:00:01.644) 0:00:17.036 ********* 2025-08-29 15:16:43.033015 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-08-29 15:16:43.033020 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-08-29 15:16:43.033023 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-08-29 15:16:43.033027 | orchestrator | 2025-08-29 15:16:43.033031 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-08-29 15:16:43.033035 | orchestrator | Friday 29 August 2025 15:14:31 +0000 (0:00:01.619) 0:00:18.655 ********* 2025-08-29 15:16:43.033039 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:16:43.033043 | orchestrator | 2025-08-29 15:16:43.033047 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-08-29 15:16:43.033051 | orchestrator | Friday 29 August 2025 15:14:31 +0000 (0:00:00.924) 0:00:19.580 ********* 2025-08-29 15:16:43.033054 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-08-29 15:16:43.033058 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-08-29 15:16:43.033062 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.033066 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:16:43.033069 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:16:43.033073 | orchestrator | 2025-08-29 15:16:43.033077 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-08-29 15:16:43.033081 | orchestrator | Friday 29 August 2025 15:14:32 +0000 (0:00:00.913) 0:00:20.494 ********* 2025-08-29 15:16:43.033084 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.033091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.033095 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.033099 | orchestrator | 2025-08-29 15:16:43.033102 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-08-29 15:16:43.033106 | orchestrator | Friday 29 August 2025 15:14:33 +0000 (0:00:00.771) 0:00:21.265 ********* 2025-08-29 15:16:43.033113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094577, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4891875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094577, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4891875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094577, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4891875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094644, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5129633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094644, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5129633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094644, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5129633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094586, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4919682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033163 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094586, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4919682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094586, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4919682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094645, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5149684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094645, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5149684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094645, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5149684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094604, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.49677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094604, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.49677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094604, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.49677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094638, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5109684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094638, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5109684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094638, 'dev': 108, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5109684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094576, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.487897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094576, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.487897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094576, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.487897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094581, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.489935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094581, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.489935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-08-29 15:16:43.033275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094581, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.489935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094588, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.493425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094588, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.493425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094588, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.493425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094632, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5059683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094632, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5059683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094632, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5059683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094641, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.511996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094641, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.511996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094641, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.511996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 80386, 'inode': 1094583, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4919682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094583, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4919682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094583, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4919682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094637, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5089684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094637, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5089684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094637, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5089684, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094608, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5059683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094608, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5059683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094608, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5059683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094597, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4963071, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094597, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4963071, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-08-29 15:16:43.033405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094597, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4963071, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094594, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4949064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094594, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4949064, 'gr_name': 'root', 'pw_name': 'root', 'wusr':
2025-08-29 15:16:43 | INFO  | Task acaa1aa2-bc4c-4b64-83a6-2e230d0d5b7e is in state SUCCESS
2025-08-29 15:16:43.033424 | orchestrator | True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094594, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4949064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094634, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5079684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:43.033439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094634, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5079684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094634, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5079684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094590, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4941456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094590, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4941456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094640, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5109684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094590, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.4941456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094640, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5109684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094675, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.549969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094640, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5109684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094675, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.549969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094653, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5259686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094653, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5259686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094675, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.549969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094650, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5184448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094650, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5184448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094653, 'dev': 108, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5259686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094657, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5289767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094657, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5289767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094650, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5184448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094647, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5159686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094647, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1756477248.5159686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094657, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5289767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094662, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5389688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094662, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5389688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094658, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5349689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094647, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5159686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094658, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5349689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094663, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.539969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094662, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5389688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094663, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.539969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094671, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5493238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094658, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5349689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094671, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5493238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094661, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5379689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094663, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.539969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094661, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5379689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-08-29 15:16:43.033635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094655, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5269687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094671, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5493238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094655, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5269687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094652, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5219686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094661, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5379689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094652, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5219686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094654, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5259686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094655, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5269687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094654, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5259686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094651, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5199685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094651, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5199685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094652, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5219686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094656, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5279686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094656, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5279686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094654, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5259686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094668, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.545969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094668, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.545969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094651, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5199685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094665, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5419688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094665, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5419688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16098, 'inode': 1094656, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5279686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094648, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5173059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094648, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5173059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094668, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.545969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094649, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5178983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094649, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5178983, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094665, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5419688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094659, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5369687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094659, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5369687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094664, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.540969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094648, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5173059, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094664, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.540969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094649, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5178983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094659, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.5369687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094664, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477248.540969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:43.033825 | orchestrator | 2025-08-29 15:16:43.033829 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-08-29 15:16:43.033833 | orchestrator | Friday 29 August 2025 15:15:17 +0000 (0:00:43.580) 0:01:04.846 ********* 2025-08-29 15:16:43.033839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.033846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.033850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:43.033854 | orchestrator | 2025-08-29 15:16:43.033858 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-08-29 15:16:43.033862 | orchestrator | Friday 29 August 2025 15:15:18 +0000 (0:00:01.082) 0:01:05.928 ********* 2025-08-29 15:16:43.033866 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.033869 | orchestrator | 2025-08-29 15:16:43.033873 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-08-29 15:16:43.033877 | orchestrator | Friday 29 August 2025 15:15:20 +0000 (0:00:02.385) 0:01:08.314 ********* 2025-08-29 15:16:43.033881 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.033885 | orchestrator | 2025-08-29 15:16:43.033888 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:43.033892 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:02.376) 0:01:10.691 ********* 2025-08-29 15:16:43.033896 | orchestrator | 2025-08-29 15:16:43.033900 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:43.033904 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:00.483) 0:01:11.175 ********* 2025-08-29 15:16:43.033907 | orchestrator | 2025-08-29 15:16:43.033911 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:43.033915 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:00.158) 0:01:11.333 ********* 2025-08-29 15:16:43.033922 | orchestrator | 2025-08-29 15:16:43.033926 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana 
container] ******************** 2025-08-29 15:16:43.033929 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:00.181) 0:01:11.515 ********* 2025-08-29 15:16:43.033933 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.033937 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.033941 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:43.033944 | orchestrator | 2025-08-29 15:16:43.033948 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-08-29 15:16:43.033971 | orchestrator | Friday 29 August 2025 15:15:26 +0000 (0:00:02.515) 0:01:14.030 ********* 2025-08-29 15:16:43.033975 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.033979 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.033983 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-08-29 15:16:43.033987 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-08-29 15:16:43.033991 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-08-29 15:16:43.033995 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.033998 | orchestrator | 2025-08-29 15:16:43.034002 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-08-29 15:16:43.034006 | orchestrator | Friday 29 August 2025 15:16:05 +0000 (0:00:38.995) 0:01:53.026 ********* 2025-08-29 15:16:43.034010 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.034039 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:43.034043 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:43.034047 | orchestrator | 2025-08-29 15:16:43.034064 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-08-29 15:16:43.034068 | orchestrator | Friday 29 August 2025 15:16:36 +0000 (0:00:30.805) 0:02:23.831 ********* 2025-08-29 15:16:43.034073 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:43.034076 | orchestrator | 2025-08-29 15:16:43.034080 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-08-29 15:16:43.034084 | orchestrator | Friday 29 August 2025 15:16:38 +0000 (0:00:02.101) 0:02:25.933 ********* 2025-08-29 15:16:43.034088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.034092 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:43.034096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:43.034100 | orchestrator | 2025-08-29 15:16:43.034104 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-08-29 15:16:43.034108 | orchestrator | Friday 29 August 2025 15:16:38 +0000 (0:00:00.678) 0:02:26.612 ********* 2025-08-29 15:16:43.034115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-08-29 15:16:43.034121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 
'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-08-29 15:16:43.034125 | orchestrator | 2025-08-29 15:16:43.034129 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-08-29 15:16:43.034133 | orchestrator | Friday 29 August 2025 15:16:41 +0000 (0:00:02.385) 0:02:28.998 ********* 2025-08-29 15:16:43.034137 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:43.034140 | orchestrator | 2025-08-29 15:16:43.034144 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:16:43.034148 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:16:43.034156 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:16:43.034160 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:16:43.034164 | orchestrator | 2025-08-29 15:16:43.034168 | orchestrator | 2025-08-29 15:16:43.034172 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:16:43.034176 | orchestrator | Friday 29 August 2025 15:16:41 +0000 (0:00:00.299) 0:02:29.297 ********* 2025-08-29 15:16:43.034179 | orchestrator | =============================================================================== 2025-08-29 15:16:43.034183 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 43.58s 2025-08-29 15:16:43.034187 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.00s 2025-08-29 15:16:43.034191 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.81s 2025-08-29 15:16:43.034194 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.52s 2025-08-29 15:16:43.034198 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.39s 2025-08-29 15:16:43.034202 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s 2025-08-29 15:16:43.034205 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.38s 2025-08-29 15:16:43.034209 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.10s 2025-08-29 15:16:43.034213 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.01s 2025-08-29 15:16:43.034217 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.76s 2025-08-29 15:16:43.034221 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.64s 2025-08-29 15:16:43.034225 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.62s 2025-08-29 15:16:43.034228 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.56s 2025-08-29 15:16:43.034232 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.37s 2025-08-29 15:16:43.034236 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.24s 2025-08-29 15:16:43.034240 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.20s 2025-08-29 15:16:43.034243 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS 
key ----- 1.15s 2025-08-29 15:16:43.034247 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s 2025-08-29 15:16:43.034251 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.04s 2025-08-29 15:16:43.034255 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2025-08-29 15:16:43.034259 | orchestrator | 2025-08-29 15:16:43 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:43.034267 | orchestrator | 2025-08-29 15:16:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:46.069553 | orchestrator | 2025-08-29 15:16:46 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:46.069639 | orchestrator | 2025-08-29 15:16:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:49.105836 | orchestrator | 2025-08-29 15:16:49 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:49.105921 | orchestrator | 2025-08-29 15:16:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:52.155116 | orchestrator | 2025-08-29 15:16:52 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:52.155234 | orchestrator | 2025-08-29 15:16:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:55.199048 | orchestrator | 2025-08-29 15:16:55 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:55.199167 | orchestrator | 2025-08-29 15:16:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:58.246065 | orchestrator | 2025-08-29 15:16:58 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:16:58.246139 | orchestrator | 2025-08-29 15:16:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:01.295013 | orchestrator | 2025-08-29 15:17:01 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:01.295126 | orchestrator | 2025-08-29 15:17:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:04.338407 | orchestrator | 2025-08-29 15:17:04 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:04.338525 | orchestrator | 2025-08-29 15:17:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:07.374252 | orchestrator | 2025-08-29 15:17:07 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:07.374326 | orchestrator | 2025-08-29 15:17:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:10.413854 | orchestrator | 2025-08-29 15:17:10 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:10.413928 | orchestrator | 2025-08-29 15:17:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:13.462889 | orchestrator | 2025-08-29 15:17:13 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:13.463029 | orchestrator | 2025-08-29 15:17:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:16.508410 | orchestrator | 2025-08-29 15:17:16 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:16.508521 | orchestrator | 2025-08-29 15:17:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:19.559630 | orchestrator | 2025-08-29 15:17:19 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:19.559741 | orchestrator | 2025-08-29 
15:17:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:22.602705 | orchestrator | 2025-08-29 15:17:22 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:22.602790 | orchestrator | 2025-08-29 15:17:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:25.641538 | orchestrator | 2025-08-29 15:17:25 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:25.641616 | orchestrator | 2025-08-29 15:17:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:28.688487 | orchestrator | 2025-08-29 15:17:28 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:28.688593 | orchestrator | 2025-08-29 15:17:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:31.748222 | orchestrator | 2025-08-29 15:17:31 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:31.748351 | orchestrator | 2025-08-29 15:17:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:34.795158 | orchestrator | 2025-08-29 15:17:34 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:34.795244 | orchestrator | 2025-08-29 15:17:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:37.836370 | orchestrator | 2025-08-29 15:17:37 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:37.836466 | orchestrator | 2025-08-29 15:17:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:40.880516 | orchestrator | 2025-08-29 15:17:40 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:40.880614 | orchestrator | 2025-08-29 15:17:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:43.924655 | orchestrator | 2025-08-29 15:17:43 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:43.924746 | orchestrator | 2025-08-29 15:17:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:46.972680 | orchestrator | 2025-08-29 15:17:46 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:46.972799 | orchestrator | 2025-08-29 15:17:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:50.024716 | orchestrator | 2025-08-29 15:17:50 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:50.024801 | orchestrator | 2025-08-29 15:17:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:53.070405 | orchestrator | 2025-08-29 15:17:53 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:53.070500 | orchestrator | 2025-08-29 15:17:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:56.110582 | orchestrator | 2025-08-29 15:17:56 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:56.110681 | orchestrator | 2025-08-29 15:17:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:59.162275 | orchestrator | 2025-08-29 15:17:59 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:17:59.162349 | orchestrator | 2025-08-29 15:17:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:02.214183 | orchestrator | 2025-08-29 15:18:02 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:02.214294 | orchestrator | 2025-08-29 15:18:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 
15:18:05.267359 | orchestrator | 2025-08-29 15:18:05 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:05.267457 | orchestrator | 2025-08-29 15:18:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:08.310380 | orchestrator | 2025-08-29 15:18:08 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:08.310453 | orchestrator | 2025-08-29 15:18:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:11.360637 | orchestrator | 2025-08-29 15:18:11 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:11.360711 | orchestrator | 2025-08-29 15:18:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:14.414772 | orchestrator | 2025-08-29 15:18:14 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:14.414857 | orchestrator | 2025-08-29 15:18:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:17.465482 | orchestrator | 2025-08-29 15:18:17 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:17.465573 | orchestrator | 2025-08-29 15:18:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:20.517953 | orchestrator | 2025-08-29 15:18:20 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:20.518093 | orchestrator | 2025-08-29 15:18:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:23.570703 | orchestrator | 2025-08-29 15:18:23 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:23.570803 | orchestrator | 2025-08-29 15:18:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:26.623187 | orchestrator | 2025-08-29 15:18:26 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:26.623297 | orchestrator | 2025-08-29 15:18:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:29.672573 | orchestrator | 2025-08-29 15:18:29 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:29.673270 | orchestrator | 2025-08-29 15:18:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:32.717470 | orchestrator | 2025-08-29 15:18:32 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:32.717601 | orchestrator | 2025-08-29 15:18:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:35.770774 | orchestrator | 2025-08-29 15:18:35 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:35.770876 | orchestrator | 2025-08-29 15:18:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:38.824285 | orchestrator | 2025-08-29 15:18:38 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:38.824455 | orchestrator | 2025-08-29 15:18:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:41.867120 | orchestrator | 2025-08-29 15:18:41 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:41.867269 | orchestrator | 2025-08-29 15:18:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:44.919713 | orchestrator | 2025-08-29 15:18:44 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:44.919795 | orchestrator | 2025-08-29 15:18:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:47.976496 | orchestrator | 2025-08-29 15:18:47 | INFO  | Task 
3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:47.976605 | orchestrator | 2025-08-29 15:18:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:51.032689 | orchestrator | 2025-08-29 15:18:51 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:51.032798 | orchestrator | 2025-08-29 15:18:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:54.070293 | orchestrator | 2025-08-29 15:18:54 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:54.070378 | orchestrator | 2025-08-29 15:18:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:57.101604 | orchestrator | 2025-08-29 15:18:57 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:18:57.101711 | orchestrator | 2025-08-29 15:18:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:00.138707 | orchestrator | 2025-08-29 15:19:00 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:00.138849 | orchestrator | 2025-08-29 15:19:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:03.191931 | orchestrator | 2025-08-29 15:19:03 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:03.192036 | orchestrator | 2025-08-29 15:19:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:06.235656 | orchestrator | 2025-08-29 15:19:06 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:06.235744 | orchestrator | 2025-08-29 15:19:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:09.290801 | orchestrator | 2025-08-29 15:19:09 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:09.290909 | orchestrator | 2025-08-29 15:19:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:12.337743 | orchestrator | 2025-08-29 15:19:12 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:12.337843 | orchestrator | 2025-08-29 15:19:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:15.376595 | orchestrator | 2025-08-29 15:19:15 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:15.376659 | orchestrator | 2025-08-29 15:19:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:18.424872 | orchestrator | 2025-08-29 15:19:18 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:18.424940 | orchestrator | 2025-08-29 15:19:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:21.475681 | orchestrator | 2025-08-29 15:19:21 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:21.475786 | orchestrator | 2025-08-29 15:19:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:24.533879 | orchestrator | 2025-08-29 15:19:24 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:24.533993 | orchestrator | 2025-08-29 15:19:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:27.586842 | orchestrator | 2025-08-29 15:19:27 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:27.586951 | orchestrator | 2025-08-29 15:19:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:30.640901 | orchestrator | 2025-08-29 15:19:30 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 
15:19:30.641000 | orchestrator | 2025-08-29 15:19:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:33.690449 | orchestrator | 2025-08-29 15:19:33 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:33.690545 | orchestrator | 2025-08-29 15:19:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:36.747565 | orchestrator | 2025-08-29 15:19:36 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:36.748841 | orchestrator | 2025-08-29 15:19:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:39.789064 | orchestrator | 2025-08-29 15:19:39 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:39.789305 | orchestrator | 2025-08-29 15:19:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:42.818440 | orchestrator | 2025-08-29 15:19:42 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:42.818640 | orchestrator | 2025-08-29 15:19:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:45.867898 | orchestrator | 2025-08-29 15:19:45 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:45.868005 | orchestrator | 2025-08-29 15:19:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:48.919748 | orchestrator | 2025-08-29 15:19:48 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:48.919883 | orchestrator | 2025-08-29 15:19:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:51.966747 | orchestrator | 2025-08-29 15:19:51 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:51.967382 | orchestrator | 2025-08-29 15:19:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:55.031480 | orchestrator | 2025-08-29 15:19:55 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:55.031572 | orchestrator | 2025-08-29 15:19:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:58.083972 | orchestrator | 2025-08-29 15:19:58 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:19:58.084081 | orchestrator | 2025-08-29 15:19:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:20:01.130860 | orchestrator | 2025-08-29 15:20:01 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:20:01.130972 | orchestrator | 2025-08-29 15:20:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:20:04.171644 | orchestrator | 2025-08-29 15:20:04 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:20:04.171721 | orchestrator | 2025-08-29 15:20:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:20:07.224598 | orchestrator | 2025-08-29 15:20:07 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED 2025-08-29 15:20:07.224691 | orchestrator | 2025-08-29 15:20:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:20:10.270995 | orchestrator | 2025-08-29 15:20:10 | INFO  | Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state SUCCESS 2025-08-29 15:20:10.272505 | orchestrator | 2025-08-29 15:20:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:10.274225 | orchestrator | 2025-08-29 15:20:10.274303 | orchestrator | 2025-08-29 15:20:10.274322 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
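The long run of "Task 3d335aa5-c40b-481a-b25a-3319180a9a5b is in state STARTED" / "Wait 1 second(s) until the next check" messages above is the deploy wrapper polling the submitted task until it leaves the STARTED state; at 15:20:10 the task reports SUCCESS and the buffered Ansible output of that play is printed below. A minimal sketch of this polling pattern, assuming a hypothetical get_state(task_id) helper that returns the task state as a string (an illustration only, not the actual OSISM client code):

    import time

    def wait_for_task(get_state, task_id, interval=1):
        # Poll the task state (e.g. "STARTED", "SUCCESS", "FAILURE") until it
        # is no longer STARTED, sleeping `interval` seconds between checks.
        while True:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                return state
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)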
2025-08-29 15:20:10.274336 | orchestrator | 2025-08-29 15:20:10.274351 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:20:10.274367 | orchestrator | Friday 29 August 2025 15:15:07 +0000 (0:00:00.359) 0:00:00.359 ********* 2025-08-29 15:20:10.274448 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.274469 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:20:10.274484 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:20:10.274498 | orchestrator | 2025-08-29 15:20:10.274513 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:20:10.274527 | orchestrator | Friday 29 August 2025 15:15:07 +0000 (0:00:00.361) 0:00:00.721 ********* 2025-08-29 15:20:10.274541 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-08-29 15:20:10.274556 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-08-29 15:20:10.274570 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-08-29 15:20:10.274584 | orchestrator | 2025-08-29 15:20:10.274599 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-08-29 15:20:10.274612 | orchestrator | 2025-08-29 15:20:10.274627 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:20:10.274642 | orchestrator | Friday 29 August 2025 15:15:08 +0000 (0:00:00.577) 0:00:01.298 ********* 2025-08-29 15:20:10.274657 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:20:10.274673 | orchestrator | 2025-08-29 15:20:10.274687 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-08-29 15:20:10.274702 | orchestrator | Friday 29 August 2025 15:15:09 +0000 (0:00:00.716) 0:00:02.014 ********* 2025-08-29 15:20:10.274718 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-08-29 15:20:10.274734 | orchestrator | 2025-08-29 15:20:10.274748 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-08-29 15:20:10.274763 | orchestrator | Friday 29 August 2025 15:15:13 +0000 (0:00:03.806) 0:00:05.821 ********* 2025-08-29 15:20:10.274778 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-08-29 15:20:10.274794 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-08-29 15:20:10.274809 | orchestrator | 2025-08-29 15:20:10.274847 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-08-29 15:20:10.274883 | orchestrator | Friday 29 August 2025 15:15:19 +0000 (0:00:06.659) 0:00:12.480 ********* 2025-08-29 15:20:10.274892 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:20:10.274901 | orchestrator | 2025-08-29 15:20:10.274910 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-08-29 15:20:10.274918 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:03.443) 0:00:15.925 ********* 2025-08-29 15:20:10.274927 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:20:10.274936 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 15:20:10.274945 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
service) 2025-08-29 15:20:10.274954 | orchestrator | 2025-08-29 15:20:10.274962 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-08-29 15:20:10.274971 | orchestrator | Friday 29 August 2025 15:15:31 +0000 (0:00:08.519) 0:00:24.444 ********* 2025-08-29 15:20:10.274979 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:20:10.274988 | orchestrator | 2025-08-29 15:20:10.274996 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-08-29 15:20:10.275005 | orchestrator | Friday 29 August 2025 15:15:35 +0000 (0:00:03.461) 0:00:27.906 ********* 2025-08-29 15:20:10.275013 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 15:20:10.275022 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 15:20:10.275030 | orchestrator | 2025-08-29 15:20:10.275039 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-08-29 15:20:10.275047 | orchestrator | Friday 29 August 2025 15:15:43 +0000 (0:00:07.866) 0:00:35.772 ********* 2025-08-29 15:20:10.275056 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-08-29 15:20:10.275064 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-08-29 15:20:10.275072 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-08-29 15:20:10.275081 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-08-29 15:20:10.275090 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-08-29 15:20:10.275098 | orchestrator | 2025-08-29 15:20:10.275106 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:20:10.275115 | orchestrator | Friday 29 August 2025 15:15:58 +0000 (0:00:15.814) 0:00:51.586 ********* 2025-08-29 15:20:10.275123 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:20:10.275132 | orchestrator | 2025-08-29 15:20:10.275140 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-08-29 15:20:10.275149 | orchestrator | Friday 29 August 2025 15:15:59 +0000 (0:00:00.731) 0:00:52.318 ********* 2025-08-29 15:20:10.275158 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275166 | orchestrator | 2025-08-29 15:20:10.275175 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-08-29 15:20:10.275183 | orchestrator | Friday 29 August 2025 15:16:04 +0000 (0:00:05.307) 0:00:57.625 ********* 2025-08-29 15:20:10.275192 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275200 | orchestrator | 2025-08-29 15:20:10.275209 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 15:20:10.275299 | orchestrator | Friday 29 August 2025 15:16:08 +0000 (0:00:03.764) 0:01:01.390 ********* 2025-08-29 15:20:10.275311 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.275320 | orchestrator | 2025-08-29 15:20:10.275329 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-08-29 15:20:10.275338 | orchestrator | Friday 29 August 2025 15:16:11 +0000 (0:00:03.087) 0:01:04.478 ********* 2025-08-29 15:20:10.275347 | orchestrator | changed: [testbed-node-0] => 
(item=lb-mgmt-sec-grp) 2025-08-29 15:20:10.275356 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 15:20:10.275365 | orchestrator | 2025-08-29 15:20:10.275374 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-08-29 15:20:10.275421 | orchestrator | Friday 29 August 2025 15:16:21 +0000 (0:00:09.695) 0:01:14.173 ********* 2025-08-29 15:20:10.275439 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-08-29 15:20:10.275454 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-08-29 15:20:10.275471 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-08-29 15:20:10.275524 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-08-29 15:20:10.275534 | orchestrator | 2025-08-29 15:20:10.275542 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-08-29 15:20:10.275551 | orchestrator | Friday 29 August 2025 15:16:38 +0000 (0:00:16.828) 0:01:31.002 ********* 2025-08-29 15:20:10.275560 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275568 | orchestrator | 2025-08-29 15:20:10.275577 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-08-29 15:20:10.275585 | orchestrator | Friday 29 August 2025 15:16:42 +0000 (0:00:04.617) 0:01:35.620 ********* 2025-08-29 15:20:10.275594 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275603 | orchestrator | 2025-08-29 15:20:10.275611 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-08-29 15:20:10.275620 | orchestrator | Friday 29 August 2025 15:16:48 +0000 (0:00:05.253) 0:01:40.873 ********* 2025-08-29 15:20:10.275635 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.275644 | orchestrator | 2025-08-29 15:20:10.275653 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-08-29 15:20:10.275661 | orchestrator | Friday 29 August 2025 15:16:48 +0000 (0:00:00.262) 0:01:41.136 ********* 2025-08-29 15:20:10.275670 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275678 | orchestrator | 2025-08-29 15:20:10.275687 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:20:10.275695 | orchestrator | Friday 29 August 2025 15:16:52 +0000 (0:00:04.340) 0:01:45.476 ********* 2025-08-29 15:20:10.275704 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:20:10.275713 | orchestrator | 2025-08-29 15:20:10.275722 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-08-29 15:20:10.275730 | orchestrator | Friday 29 August 2025 15:16:54 +0000 (0:00:01.396) 0:01:46.872 ********* 2025-08-29 15:20:10.275739 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.275753 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275767 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.275783 | orchestrator | 2025-08-29 15:20:10.275797 | 
orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-08-29 15:20:10.275812 | orchestrator | Friday 29 August 2025 15:16:58 +0000 (0:00:04.843) 0:01:51.715 ********* 2025-08-29 15:20:10.275827 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.275840 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275854 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.275868 | orchestrator | 2025-08-29 15:20:10.275882 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-08-29 15:20:10.275895 | orchestrator | Friday 29 August 2025 15:17:03 +0000 (0:00:04.685) 0:01:56.401 ********* 2025-08-29 15:20:10.275910 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.275924 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.275938 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.275953 | orchestrator | 2025-08-29 15:20:10.275968 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-08-29 15:20:10.275984 | orchestrator | Friday 29 August 2025 15:17:04 +0000 (0:00:00.842) 0:01:57.244 ********* 2025-08-29 15:20:10.276012 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276026 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:20:10.276043 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:20:10.276060 | orchestrator | 2025-08-29 15:20:10.276075 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-08-29 15:20:10.276091 | orchestrator | Friday 29 August 2025 15:17:07 +0000 (0:00:02.586) 0:01:59.830 ********* 2025-08-29 15:20:10.276103 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.276113 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.276121 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.276130 | orchestrator | 2025-08-29 15:20:10.276138 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-08-29 15:20:10.276147 | orchestrator | Friday 29 August 2025 15:17:08 +0000 (0:00:01.358) 0:02:01.189 ********* 2025-08-29 15:20:10.276155 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.276164 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.276172 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.276181 | orchestrator | 2025-08-29 15:20:10.276189 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-08-29 15:20:10.276198 | orchestrator | Friday 29 August 2025 15:17:09 +0000 (0:00:01.282) 0:02:02.471 ********* 2025-08-29 15:20:10.276207 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.276215 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.276224 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.276233 | orchestrator | 2025-08-29 15:20:10.276297 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-08-29 15:20:10.276315 | orchestrator | Friday 29 August 2025 15:17:11 +0000 (0:00:02.189) 0:02:04.661 ********* 2025-08-29 15:20:10.276328 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.276373 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.276407 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.276422 | orchestrator | 2025-08-29 15:20:10.276435 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 
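The "Wait for interface ohm0 ip appear" task named above retries until the Octavia health-manager interface ohm0, brought up by the octavia-interface service and addressed via dhclient in the preceding tasks, has an IPv4 address. A rough, hypothetical equivalent of such a check, assuming the iproute2 ip tool is on the path; the retry count and delay are made-up values, and this is not the kolla-ansible implementation:

    import subprocess
    import time

    def wait_for_ipv4(interface="ohm0", retries=12, delay=5):
        # Return True once `ip -o -4 addr show dev <interface>` reports an
        # inet address, or False after exhausting the retries.
        for _ in range(retries):
            out = subprocess.run(
                ["ip", "-o", "-4", "addr", "show", "dev", interface],
                capture_output=True, text=True,
            ).stdout
            if "inet " in out:
                return True
            time.sleep(delay)
        return False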
2025-08-29 15:20:10.276448 | orchestrator | Friday 29 August 2025 15:17:13 +0000 (0:00:01.938) 0:02:06.599 ********* 2025-08-29 15:20:10.276462 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276475 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:20:10.276488 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:20:10.276501 | orchestrator | 2025-08-29 15:20:10.276515 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-08-29 15:20:10.276529 | orchestrator | Friday 29 August 2025 15:17:14 +0000 (0:00:00.748) 0:02:07.347 ********* 2025-08-29 15:20:10.276542 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276579 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:20:10.276594 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:20:10.276607 | orchestrator | 2025-08-29 15:20:10.276621 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:20:10.276635 | orchestrator | Friday 29 August 2025 15:17:18 +0000 (0:00:03.815) 0:02:11.163 ********* 2025-08-29 15:20:10.276648 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:20:10.276662 | orchestrator | 2025-08-29 15:20:10.276676 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-08-29 15:20:10.276690 | orchestrator | Friday 29 August 2025 15:17:19 +0000 (0:00:00.912) 0:02:12.076 ********* 2025-08-29 15:20:10.276704 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276717 | orchestrator | 2025-08-29 15:20:10.276731 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 15:20:10.276743 | orchestrator | Friday 29 August 2025 15:17:23 +0000 (0:00:03.981) 0:02:16.057 ********* 2025-08-29 15:20:10.276757 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276770 | orchestrator | 2025-08-29 15:20:10.276783 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-08-29 15:20:10.276797 | orchestrator | Friday 29 August 2025 15:17:26 +0000 (0:00:03.112) 0:02:19.170 ********* 2025-08-29 15:20:10.276822 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-08-29 15:20:10.276843 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 15:20:10.276856 | orchestrator | 2025-08-29 15:20:10.276870 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-08-29 15:20:10.276883 | orchestrator | Friday 29 August 2025 15:17:33 +0000 (0:00:07.492) 0:02:26.663 ********* 2025-08-29 15:20:10.276896 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276910 | orchestrator | 2025-08-29 15:20:10.276924 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-08-29 15:20:10.276938 | orchestrator | Friday 29 August 2025 15:17:37 +0000 (0:00:03.616) 0:02:30.280 ********* 2025-08-29 15:20:10.276951 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:20:10.276965 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:20:10.276978 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:20:10.276992 | orchestrator | 2025-08-29 15:20:10.277005 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-08-29 15:20:10.277019 | orchestrator | Friday 29 August 2025 15:17:37 +0000 (0:00:00.382) 0:02:30.662 ********* 2025-08-29 
15:20:10.277052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.277140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.277157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.277199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.277223 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.277239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.277271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.277575 | orchestrator | 2025-08-29 15:20:10.277584 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-08-29 
15:20:10.277594 | orchestrator | Friday 29 August 2025 15:17:40 +0000 (0:00:02.643) 0:02:33.306 ********* 2025-08-29 15:20:10.277603 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.277612 | orchestrator | 2025-08-29 15:20:10.277621 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-08-29 15:20:10.277630 | orchestrator | Friday 29 August 2025 15:17:40 +0000 (0:00:00.156) 0:02:33.463 ********* 2025-08-29 15:20:10.277638 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.277657 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:20:10.277666 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:20:10.277675 | orchestrator | 2025-08-29 15:20:10.277684 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-08-29 15:20:10.277692 | orchestrator | Friday 29 August 2025 15:17:41 +0000 (0:00:00.678) 0:02:34.142 ********* 2025-08-29 15:20:10.277702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.277719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.277729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.277739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.277749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.277758 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.277792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.277809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.277827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.277837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.277846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.277855 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:20:10.277887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.277904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.277914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.277928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.277937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.277946 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:20:10.277955 | orchestrator | 2025-08-29 15:20:10.277964 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:20:10.277973 | orchestrator | Friday 29 August 2025 15:17:42 +0000 (0:00:00.839) 0:02:34.982 ********* 2025-08-29 15:20:10.277982 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:20:10.277991 | orchestrator | 2025-08-29 15:20:10.278000 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-08-29 15:20:10.278009 | orchestrator | Friday 29 August 2025 15:17:42 +0000 (0:00:00.644) 0:02:35.626 ********* 2025-08-29 15:20:10.278081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.278147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.278175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.278190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.278206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.278221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.278236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.278447 | orchestrator | 2025-08-29 15:20:10.278462 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-08-29 15:20:10.278476 | orchestrator | Friday 29 August 2025 15:17:48 +0000 (0:00:05.712) 0:02:41.338 ********* 2025-08-29 15:20:10.278496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.278511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 
15:20:10.278525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.278589 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.278604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.278625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2025-08-29 15:20:10.278639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.278690 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:20:10.278714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.278729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.278744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.278801 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:20:10.278814 | orchestrator | 2025-08-29 15:20:10.278828 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-08-29 15:20:10.278842 | orchestrator | Friday 29 August 2025 15:17:49 +0000 (0:00:00.808) 0:02:42.147 ********* 2025-08-29 15:20:10.278857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.278879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.278894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.278928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.278941 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.278955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.278985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.279010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.279024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.279038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.279052 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:20:10.279072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:20:10.279093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:20:10.279108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.279130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:20:10.279144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:20:10.279158 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:20:10.279172 | orchestrator | 2025-08-29 15:20:10.279185 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-08-29 15:20:10.279199 | orchestrator | Friday 29 August 2025 15:17:50 +0000 (0:00:01.108) 0:02:43.255 ********* 2025-08-29 15:20:10.279218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.279242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.279257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.279308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.279327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.279343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.279367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279521 | orchestrator | 2025-08-29 15:20:10.279530 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-08-29 15:20:10.279544 | orchestrator | Friday 29 August 2025 15:17:56 +0000 (0:00:06.164) 0:02:49.420 ********* 2025-08-29 15:20:10.279559 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:20:10.279574 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:20:10.279588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:20:10.279603 | orchestrator | 2025-08-29 15:20:10.279616 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-08-29 15:20:10.279628 
| orchestrator | Friday 29 August 2025 15:17:58 +0000 (0:00:01.792) 0:02:51.213 ********* 2025-08-29 15:20:10.279651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.279665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.279697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.279713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.279728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.279744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.279769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.279927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}}) 2025-08-29 15:20:10.279944 | orchestrator | 2025-08-29 15:20:10.279953 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-08-29 15:20:10.279962 | orchestrator | Friday 29 August 2025 15:18:18 +0000 (0:00:20.161) 0:03:11.374 ********* 2025-08-29 15:20:10.279971 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.279980 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.279989 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.279998 | orchestrator | 2025-08-29 15:20:10.280006 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-08-29 15:20:10.280015 | orchestrator | Friday 29 August 2025 15:18:20 +0000 (0:00:01.636) 0:03:13.010 ********* 2025-08-29 15:20:10.280028 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280037 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280046 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280055 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280064 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280074 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280083 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280092 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280101 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280110 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280119 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280127 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280136 | orchestrator | 2025-08-29 15:20:10.280145 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-08-29 15:20:10.280154 | orchestrator | Friday 29 August 2025 15:18:26 +0000 (0:00:06.001) 0:03:19.012 ********* 2025-08-29 15:20:10.280163 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280172 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280180 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280189 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280198 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280207 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280215 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280224 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280233 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280242 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280251 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280260 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280269 | orchestrator | 2025-08-29 15:20:10.280278 | 
orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-08-29 15:20:10.280286 | orchestrator | Friday 29 August 2025 15:18:32 +0000 (0:00:05.958) 0:03:24.971 ********* 2025-08-29 15:20:10.280295 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280304 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280312 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:20:10.280332 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280341 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280350 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:20:10.280359 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280374 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280598 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:20:10.280662 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280673 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280682 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:20:10.280691 | orchestrator | 2025-08-29 15:20:10.280700 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-08-29 15:20:10.280710 | orchestrator | Friday 29 August 2025 15:18:37 +0000 (0:00:05.641) 0:03:30.612 ********* 2025-08-29 15:20:10.280720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.280740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.280749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:20:10.280767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.280788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.280797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:20:10.280806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:20:10.280902 | orchestrator | 2025-08-29 15:20:10.280923 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:20:10.280932 | orchestrator | Friday 29 August 2025 15:18:41 +0000 (0:00:04.058) 0:03:34.671 ********* 2025-08-29 15:20:10.280940 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:20:10.280948 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:20:10.280956 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:20:10.280964 | orchestrator | 2025-08-29 15:20:10.280973 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-08-29 15:20:10.281015 | orchestrator | Friday 29 August 2025 15:18:42 +0000 (0:00:00.348) 0:03:35.020 ********* 2025-08-29 15:20:10.281030 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281039 | orchestrator | 2025-08-29 15:20:10.281047 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-08-29 15:20:10.281055 | orchestrator | Friday 29 August 2025 15:18:44 +0000 (0:00:02.103) 0:03:37.123 ********* 2025-08-29 15:20:10.281063 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281071 | orchestrator | 2025-08-29 15:20:10.281079 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-08-29 15:20:10.281087 | orchestrator | Friday 29 August 2025 15:18:47 +0000 (0:00:02.742) 0:03:39.865 ********* 2025-08-29 15:20:10.281095 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281103 | orchestrator | 2025-08-29 15:20:10.281111 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-08-29 15:20:10.281119 | orchestrator | Friday 29 August 2025 15:18:49 +0000 (0:00:02.268) 0:03:42.133 ********* 2025-08-29 15:20:10.281127 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281135 | orchestrator | 2025-08-29 15:20:10.281143 | orchestrator | 
TASK [octavia : Running Octavia bootstrap container] *************************** 2025-08-29 15:20:10.281151 | orchestrator | Friday 29 August 2025 15:18:51 +0000 (0:00:02.184) 0:03:44.318 ********* 2025-08-29 15:20:10.281159 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281167 | orchestrator | 2025-08-29 15:20:10.281175 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-08-29 15:20:10.281183 | orchestrator | Friday 29 August 2025 15:19:12 +0000 (0:00:20.416) 0:04:04.734 ********* 2025-08-29 15:20:10.281191 | orchestrator | 2025-08-29 15:20:10.281199 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-08-29 15:20:10.281207 | orchestrator | Friday 29 August 2025 15:19:12 +0000 (0:00:00.081) 0:04:04.816 ********* 2025-08-29 15:20:10.281215 | orchestrator | 2025-08-29 15:20:10.281223 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-08-29 15:20:10.281231 | orchestrator | Friday 29 August 2025 15:19:12 +0000 (0:00:00.073) 0:04:04.889 ********* 2025-08-29 15:20:10.281239 | orchestrator | 2025-08-29 15:20:10.281247 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-08-29 15:20:10.281261 | orchestrator | Friday 29 August 2025 15:19:12 +0000 (0:00:00.077) 0:04:04.967 ********* 2025-08-29 15:20:10.281270 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281278 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.281286 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.281295 | orchestrator | 2025-08-29 15:20:10.281303 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-08-29 15:20:10.281310 | orchestrator | Friday 29 August 2025 15:19:28 +0000 (0:00:16.474) 0:04:21.441 ********* 2025-08-29 15:20:10.281318 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281326 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.281334 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.281342 | orchestrator | 2025-08-29 15:20:10.281350 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-08-29 15:20:10.281358 | orchestrator | Friday 29 August 2025 15:19:40 +0000 (0:00:12.273) 0:04:33.715 ********* 2025-08-29 15:20:10.281366 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281374 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.281382 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.281409 | orchestrator | 2025-08-29 15:20:10.281418 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-08-29 15:20:10.281426 | orchestrator | Friday 29 August 2025 15:19:51 +0000 (0:00:10.564) 0:04:44.280 ********* 2025-08-29 15:20:10.281434 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281441 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.281449 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:20:10.281457 | orchestrator | 2025-08-29 15:20:10.281465 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-08-29 15:20:10.281472 | orchestrator | Friday 29 August 2025 15:20:01 +0000 (0:00:10.095) 0:04:54.375 ********* 2025-08-29 15:20:10.281489 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:20:10.281497 | orchestrator | changed: [testbed-node-1] 2025-08-29 
15:20:10.281505 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:20:10.281513 | orchestrator | 2025-08-29 15:20:10.281521 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:20:10.281530 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:20:10.281542 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:20:10.281551 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:20:10.281559 | orchestrator | 2025-08-29 15:20:10.281567 | orchestrator | 2025-08-29 15:20:10.281575 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:20:10.281583 | orchestrator | Friday 29 August 2025 15:20:08 +0000 (0:00:06.443) 0:05:00.818 ********* 2025-08-29 15:20:10.281591 | orchestrator | =============================================================================== 2025-08-29 15:20:10.281599 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.42s 2025-08-29 15:20:10.281607 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.16s 2025-08-29 15:20:10.281615 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.83s 2025-08-29 15:20:10.281623 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.47s 2025-08-29 15:20:10.281632 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.81s 2025-08-29 15:20:10.281640 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.27s 2025-08-29 15:20:10.281648 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.56s 2025-08-29 15:20:10.281656 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.10s 2025-08-29 15:20:10.281664 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.70s 2025-08-29 15:20:10.281672 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.52s 2025-08-29 15:20:10.281680 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.87s 2025-08-29 15:20:10.281688 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.49s 2025-08-29 15:20:10.281696 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.66s 2025-08-29 15:20:10.281704 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.44s 2025-08-29 15:20:10.281712 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.17s 2025-08-29 15:20:10.281720 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.00s 2025-08-29 15:20:10.281727 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.96s 2025-08-29 15:20:10.281735 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.71s 2025-08-29 15:20:10.281744 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.64s 2025-08-29 15:20:10.281752 | orchestrator | octavia : Create amphora flavor 
----------------------------------------- 5.31s 2025-08-29 15:20:13.320491 | orchestrator | 2025-08-29 15:20:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:16.365115 | orchestrator | 2025-08-29 15:20:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:19.415829 | orchestrator | 2025-08-29 15:20:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:22.460984 | orchestrator | 2025-08-29 15:20:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:25.510304 | orchestrator | 2025-08-29 15:20:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:28.558173 | orchestrator | 2025-08-29 15:20:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:31.602925 | orchestrator | 2025-08-29 15:20:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:34.644575 | orchestrator | 2025-08-29 15:20:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:37.696529 | orchestrator | 2025-08-29 15:20:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:40.749265 | orchestrator | 2025-08-29 15:20:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:43.789842 | orchestrator | 2025-08-29 15:20:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:46.831601 | orchestrator | 2025-08-29 15:20:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:49.879830 | orchestrator | 2025-08-29 15:20:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:52.929376 | orchestrator | 2025-08-29 15:20:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:55.976942 | orchestrator | 2025-08-29 15:20:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:59.020390 | orchestrator | 2025-08-29 15:20:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:21:02.059267 | orchestrator | 2025-08-29 15:21:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:21:05.097634 | orchestrator | 2025-08-29 15:21:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:21:08.136976 | orchestrator | 2025-08-29 15:21:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:21:11.175355 | orchestrator | 2025-08-29 15:21:11.500389 | orchestrator | 2025-08-29 15:21:11.506076 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 15:21:11 UTC 2025 2025-08-29 15:21:11.506152 | orchestrator | 2025-08-29 15:21:11.971112 | orchestrator | ok: Runtime: 0:38:40.029368 2025-08-29 15:21:12.242001 | 2025-08-29 15:21:12.242154 | TASK [Bootstrap services] 2025-08-29 15:21:13.022336 | orchestrator | 2025-08-29 15:21:13.022584 | orchestrator | # BOOTSTRAP 2025-08-29 15:21:13.022609 | orchestrator | 2025-08-29 15:21:13.022623 | orchestrator | + set -e 2025-08-29 15:21:13.022637 | orchestrator | + echo 2025-08-29 15:21:13.022650 | orchestrator | + echo '# BOOTSTRAP' 2025-08-29 15:21:13.022668 | orchestrator | + echo 2025-08-29 15:21:13.022713 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-08-29 15:21:13.032605 | orchestrator | + set -e 2025-08-29 15:21:13.032677 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-08-29 15:21:17.366797 | orchestrator | 2025-08-29 15:21:17 | INFO  | It takes a moment until task ad802060-3448-43a8-b436-f14a191fe2bf (flavor-manager) has been 
started and output is visible here. 2025-08-29 15:21:24.994705 | orchestrator | 2025-08-29 15:21:21 | INFO  | Flavor SCS-1V-4 created 2025-08-29 15:21:24.994781 | orchestrator | 2025-08-29 15:21:21 | INFO  | Flavor SCS-2V-8 created 2025-08-29 15:21:24.994789 | orchestrator | 2025-08-29 15:21:21 | INFO  | Flavor SCS-4V-16 created 2025-08-29 15:21:24.994794 | orchestrator | 2025-08-29 15:21:21 | INFO  | Flavor SCS-8V-32 created 2025-08-29 15:21:24.994798 | orchestrator | 2025-08-29 15:21:21 | INFO  | Flavor SCS-1V-2 created 2025-08-29 15:21:24.994802 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-2V-4 created 2025-08-29 15:21:24.994806 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-4V-8 created 2025-08-29 15:21:24.994810 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-8V-16 created 2025-08-29 15:21:24.994817 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-16V-32 created 2025-08-29 15:21:24.994821 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-1V-8 created 2025-08-29 15:21:24.994825 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-2V-16 created 2025-08-29 15:21:24.994829 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-4V-32 created 2025-08-29 15:21:24.994833 | orchestrator | 2025-08-29 15:21:22 | INFO  | Flavor SCS-1L-1 created 2025-08-29 15:21:24.994837 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-2V-4-20s created 2025-08-29 15:21:24.994841 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-4V-16-100s created 2025-08-29 15:21:24.994844 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-1V-4-10 created 2025-08-29 15:21:24.994848 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-2V-8-20 created 2025-08-29 15:21:24.994852 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-4V-16-50 created 2025-08-29 15:21:24.994856 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-8V-32-100 created 2025-08-29 15:21:24.994860 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-1V-2-5 created 2025-08-29 15:21:24.994864 | orchestrator | 2025-08-29 15:21:23 | INFO  | Flavor SCS-2V-4-10 created 2025-08-29 15:21:24.994868 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-4V-8-20 created 2025-08-29 15:21:24.994872 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-8V-16-50 created 2025-08-29 15:21:24.994875 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-16V-32-100 created 2025-08-29 15:21:24.994879 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-1V-8-20 created 2025-08-29 15:21:24.994883 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-2V-16-50 created 2025-08-29 15:21:24.994887 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-4V-32-100 created 2025-08-29 15:21:24.994891 | orchestrator | 2025-08-29 15:21:24 | INFO  | Flavor SCS-1L-1-5 created 2025-08-29 15:21:27.201883 | orchestrator | 2025-08-29 15:21:27 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-08-29 15:21:37.304080 | orchestrator | 2025-08-29 15:21:37 | INFO  | Task a341692a-5d89-45ec-bdf6-8cbe25c23cbe (bootstrap-basic) was prepared for execution. 2025-08-29 15:21:37.304216 | orchestrator | 2025-08-29 15:21:37 | INFO  | It takes a moment until task a341692a-5d89-45ec-bdf6-8cbe25c23cbe (bootstrap-basic) has been started and output is visible here. 
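The flavor-manager run above creates the SCS standard flavor set from a predefined definition. A minimal hand-rolled sketch with the plain OpenStack CLI is shown below; the vCPU/RAM/disk numbers are only inferred from the SCS naming scheme (SCS-<vCPUs>V-<RAM GiB>[-<root disk GB>]) and are assumptions, not values taken from this job output.

  # Hypothetical manual equivalent of the flavor-manager task (geometry assumed from the SCS names).
  openstack flavor create --public --vcpus 2 --ram 8192 --disk 0  SCS-2V-8      # 2 vCPUs, 8 GiB RAM, no root disk
  openstack flavor create --public --vcpus 2 --ram 8192 --disk 20 SCS-2V-8-20   # same geometry, plus a 20 GB root disk
  openstack flavor create --public --vcpus 1 --ram 1024 --disk 5  SCS-1L-1-5    # "L" line: 1 low-performance vCPU, 1 GiB RAM, 5 GB disk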
2025-08-29 15:22:38.817171 | orchestrator | 2025-08-29 15:22:38.817276 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-08-29 15:22:38.817294 | orchestrator | 2025-08-29 15:22:38.817306 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 15:22:38.817318 | orchestrator | Friday 29 August 2025 15:21:41 +0000 (0:00:00.082) 0:00:00.083 ********* 2025-08-29 15:22:38.817329 | orchestrator | ok: [localhost] 2025-08-29 15:22:38.817341 | orchestrator | 2025-08-29 15:22:38.817352 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-08-29 15:22:38.817365 | orchestrator | Friday 29 August 2025 15:21:43 +0000 (0:00:01.968) 0:00:02.051 ********* 2025-08-29 15:22:38.817377 | orchestrator | ok: [localhost] 2025-08-29 15:22:38.817388 | orchestrator | 2025-08-29 15:22:38.817399 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-08-29 15:22:38.817410 | orchestrator | Friday 29 August 2025 15:21:52 +0000 (0:00:08.566) 0:00:10.618 ********* 2025-08-29 15:22:38.817421 | orchestrator | changed: [localhost] 2025-08-29 15:22:38.817432 | orchestrator | 2025-08-29 15:22:38.817443 | orchestrator | TASK [Get volume type local] *************************************************** 2025-08-29 15:22:38.817454 | orchestrator | Friday 29 August 2025 15:22:00 +0000 (0:00:07.804) 0:00:18.422 ********* 2025-08-29 15:22:38.817465 | orchestrator | ok: [localhost] 2025-08-29 15:22:38.817477 | orchestrator | 2025-08-29 15:22:38.817488 | orchestrator | TASK [Create volume type local] ************************************************ 2025-08-29 15:22:38.817499 | orchestrator | Friday 29 August 2025 15:22:07 +0000 (0:00:07.595) 0:00:26.017 ********* 2025-08-29 15:22:38.817510 | orchestrator | changed: [localhost] 2025-08-29 15:22:38.817525 | orchestrator | 2025-08-29 15:22:38.817536 | orchestrator | TASK [Create public network] *************************************************** 2025-08-29 15:22:38.817547 | orchestrator | Friday 29 August 2025 15:22:14 +0000 (0:00:06.623) 0:00:32.640 ********* 2025-08-29 15:22:38.817558 | orchestrator | changed: [localhost] 2025-08-29 15:22:38.817569 | orchestrator | 2025-08-29 15:22:38.817580 | orchestrator | TASK [Set public network to default] ******************************************* 2025-08-29 15:22:38.817591 | orchestrator | Friday 29 August 2025 15:22:19 +0000 (0:00:05.428) 0:00:38.069 ********* 2025-08-29 15:22:38.817602 | orchestrator | changed: [localhost] 2025-08-29 15:22:38.817612 | orchestrator | 2025-08-29 15:22:38.817632 | orchestrator | TASK [Create public subnet] **************************************************** 2025-08-29 15:22:38.817644 | orchestrator | Friday 29 August 2025 15:22:26 +0000 (0:00:06.553) 0:00:44.623 ********* 2025-08-29 15:22:38.817694 | orchestrator | changed: [localhost] 2025-08-29 15:22:38.817709 | orchestrator | 2025-08-29 15:22:38.817720 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-08-29 15:22:38.817732 | orchestrator | Friday 29 August 2025 15:22:30 +0000 (0:00:04.457) 0:00:49.081 ********* 2025-08-29 15:22:38.817744 | orchestrator | changed: [localhost] 2025-08-29 15:22:38.817756 | orchestrator | 2025-08-29 15:22:38.817767 | orchestrator | TASK [Create manager role] ***************************************************** 2025-08-29 15:22:38.817780 | orchestrator | Friday 29 August 2025 
15:22:34 +0000 (0:00:04.113) 0:00:53.194 ********* 2025-08-29 15:22:38.817793 | orchestrator | ok: [localhost] 2025-08-29 15:22:38.817805 | orchestrator | 2025-08-29 15:22:38.817817 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:22:38.817829 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:22:38.817843 | orchestrator | 2025-08-29 15:22:38.817855 | orchestrator | 2025-08-29 15:22:38.817870 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:22:38.817889 | orchestrator | Friday 29 August 2025 15:22:38 +0000 (0:00:03.637) 0:00:56.832 ********* 2025-08-29 15:22:38.817938 | orchestrator | =============================================================================== 2025-08-29 15:22:38.817961 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.57s 2025-08-29 15:22:38.817980 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.80s 2025-08-29 15:22:38.818000 | orchestrator | Get volume type local --------------------------------------------------- 7.60s 2025-08-29 15:22:38.818012 | orchestrator | Create volume type local ------------------------------------------------ 6.62s 2025-08-29 15:22:38.818066 | orchestrator | Set public network to default ------------------------------------------- 6.55s 2025-08-29 15:22:38.818079 | orchestrator | Create public network --------------------------------------------------- 5.43s 2025-08-29 15:22:38.818092 | orchestrator | Create public subnet ---------------------------------------------------- 4.46s 2025-08-29 15:22:38.818104 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.11s 2025-08-29 15:22:38.818115 | orchestrator | Create manager role ----------------------------------------------------- 3.64s 2025-08-29 15:22:38.818126 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s 2025-08-29 15:22:41.272760 | orchestrator | 2025-08-29 15:22:41 | INFO  | It takes a moment until task 460d9005-f3f0-4309-8740-ef981a9d8e1b (image-manager) has been started and output is visible here. 2025-08-29 15:23:21.539603 | orchestrator | 2025-08-29 15:22:44 | INFO  | Processing image 'Cirros 0.6.2' 2025-08-29 15:23:21.539826 | orchestrator | 2025-08-29 15:22:44 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-08-29 15:23:21.539850 | orchestrator | 2025-08-29 15:22:44 | INFO  | Importing image Cirros 0.6.2 2025-08-29 15:23:21.539863 | orchestrator | 2025-08-29 15:22:44 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-08-29 15:23:21.539875 | orchestrator | 2025-08-29 15:22:46 | INFO  | Waiting for image to leave queued state... 2025-08-29 15:23:21.539887 | orchestrator | 2025-08-29 15:22:48 | INFO  | Waiting for import to complete... 
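The bootstrap-basic play that just finished (volume types, public network and subnet, default subnet pool, manager role) corresponds to a handful of OpenStack CLI calls. The sketch below is only an approximation: the encryption cipher, the subnet CIDR and the pool prefix are placeholders, since the real values live in the testbed configuration and are not visible in this log.

  # Approximate CLI equivalent of the bootstrap-basic play (cipher, CIDRs and prefix are placeholders).
  openstack volume type create \
    --encryption-provider luks --encryption-cipher aes-xts-plain64 \
    --encryption-key-size 256 --encryption-control-location front-end LUKS
  openstack volume type create local                 # backend-specific extra specs omitted
  openstack network create --external --default public
  openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet
  openstack subnet pool create --default --pool-prefix 10.0.0.0/16 default-ipv4-pool
  openstack role create manager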
2025-08-29 15:23:21.539898 | orchestrator | 2025-08-29 15:22:58 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-08-29 15:23:21.539909 | orchestrator | 2025-08-29 15:22:59 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-08-29 15:23:21.539920 | orchestrator | 2025-08-29 15:22:59 | INFO  | Setting internal_version = 0.6.2 2025-08-29 15:23:21.539931 | orchestrator | 2025-08-29 15:22:59 | INFO  | Setting image_original_user = cirros 2025-08-29 15:23:21.539942 | orchestrator | 2025-08-29 15:22:59 | INFO  | Adding tag os:cirros 2025-08-29 15:23:21.539953 | orchestrator | 2025-08-29 15:22:59 | INFO  | Setting property architecture: x86_64 2025-08-29 15:23:21.539964 | orchestrator | 2025-08-29 15:22:59 | INFO  | Setting property hw_disk_bus: scsi 2025-08-29 15:23:21.539975 | orchestrator | 2025-08-29 15:23:00 | INFO  | Setting property hw_rng_model: virtio 2025-08-29 15:23:21.539986 | orchestrator | 2025-08-29 15:23:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-08-29 15:23:21.539997 | orchestrator | 2025-08-29 15:23:00 | INFO  | Setting property hw_watchdog_action: reset 2025-08-29 15:23:21.540008 | orchestrator | 2025-08-29 15:23:00 | INFO  | Setting property hypervisor_type: qemu 2025-08-29 15:23:21.540019 | orchestrator | 2025-08-29 15:23:00 | INFO  | Setting property os_distro: cirros 2025-08-29 15:23:21.540029 | orchestrator | 2025-08-29 15:23:01 | INFO  | Setting property replace_frequency: never 2025-08-29 15:23:21.540040 | orchestrator | 2025-08-29 15:23:01 | INFO  | Setting property uuid_validity: none 2025-08-29 15:23:21.540051 | orchestrator | 2025-08-29 15:23:01 | INFO  | Setting property provided_until: none 2025-08-29 15:23:21.540083 | orchestrator | 2025-08-29 15:23:01 | INFO  | Setting property image_description: Cirros 2025-08-29 15:23:21.540103 | orchestrator | 2025-08-29 15:23:01 | INFO  | Setting property image_name: Cirros 2025-08-29 15:23:21.540116 | orchestrator | 2025-08-29 15:23:02 | INFO  | Setting property internal_version: 0.6.2 2025-08-29 15:23:21.540133 | orchestrator | 2025-08-29 15:23:02 | INFO  | Setting property image_original_user: cirros 2025-08-29 15:23:21.540146 | orchestrator | 2025-08-29 15:23:02 | INFO  | Setting property os_version: 0.6.2 2025-08-29 15:23:21.540159 | orchestrator | 2025-08-29 15:23:02 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-08-29 15:23:21.540172 | orchestrator | 2025-08-29 15:23:02 | INFO  | Setting property image_build_date: 2023-05-30 2025-08-29 15:23:21.540188 | orchestrator | 2025-08-29 15:23:03 | INFO  | Checking status of 'Cirros 0.6.2' 2025-08-29 15:23:21.540207 | orchestrator | 2025-08-29 15:23:03 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-08-29 15:23:21.540226 | orchestrator | 2025-08-29 15:23:03 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-08-29 15:23:21.540244 | orchestrator | 2025-08-29 15:23:03 | INFO  | Processing image 'Cirros 0.6.3' 2025-08-29 15:23:21.540262 | orchestrator | 2025-08-29 15:23:03 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-08-29 15:23:21.540280 | orchestrator | 2025-08-29 15:23:03 | INFO  | Importing image Cirros 0.6.3 2025-08-29 15:23:21.540298 | orchestrator | 2025-08-29 15:23:03 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-08-29 15:23:21.540350 | orchestrator | 2025-08-29 
15:23:04 | INFO  | Waiting for image to leave queued state... 2025-08-29 15:23:21.540371 | orchestrator | 2025-08-29 15:23:06 | INFO  | Waiting for import to complete... 2025-08-29 15:23:21.540391 | orchestrator | 2025-08-29 15:23:16 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-08-29 15:23:21.540434 | orchestrator | 2025-08-29 15:23:17 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-08-29 15:23:21.540455 | orchestrator | 2025-08-29 15:23:17 | INFO  | Setting internal_version = 0.6.3 2025-08-29 15:23:21.540467 | orchestrator | 2025-08-29 15:23:17 | INFO  | Setting image_original_user = cirros 2025-08-29 15:23:21.540478 | orchestrator | 2025-08-29 15:23:17 | INFO  | Adding tag os:cirros 2025-08-29 15:23:21.540488 | orchestrator | 2025-08-29 15:23:17 | INFO  | Setting property architecture: x86_64 2025-08-29 15:23:21.540499 | orchestrator | 2025-08-29 15:23:17 | INFO  | Setting property hw_disk_bus: scsi 2025-08-29 15:23:21.540509 | orchestrator | 2025-08-29 15:23:17 | INFO  | Setting property hw_rng_model: virtio 2025-08-29 15:23:21.540520 | orchestrator | 2025-08-29 15:23:17 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-08-29 15:23:21.540531 | orchestrator | 2025-08-29 15:23:18 | INFO  | Setting property hw_watchdog_action: reset 2025-08-29 15:23:21.540541 | orchestrator | 2025-08-29 15:23:18 | INFO  | Setting property hypervisor_type: qemu 2025-08-29 15:23:21.540552 | orchestrator | 2025-08-29 15:23:18 | INFO  | Setting property os_distro: cirros 2025-08-29 15:23:21.540562 | orchestrator | 2025-08-29 15:23:18 | INFO  | Setting property replace_frequency: never 2025-08-29 15:23:21.540573 | orchestrator | 2025-08-29 15:23:18 | INFO  | Setting property uuid_validity: none 2025-08-29 15:23:21.540597 | orchestrator | 2025-08-29 15:23:19 | INFO  | Setting property provided_until: none 2025-08-29 15:23:21.540608 | orchestrator | 2025-08-29 15:23:19 | INFO  | Setting property image_description: Cirros 2025-08-29 15:23:21.540618 | orchestrator | 2025-08-29 15:23:19 | INFO  | Setting property image_name: Cirros 2025-08-29 15:23:21.540629 | orchestrator | 2025-08-29 15:23:19 | INFO  | Setting property internal_version: 0.6.3 2025-08-29 15:23:21.540639 | orchestrator | 2025-08-29 15:23:19 | INFO  | Setting property image_original_user: cirros 2025-08-29 15:23:21.540650 | orchestrator | 2025-08-29 15:23:20 | INFO  | Setting property os_version: 0.6.3 2025-08-29 15:23:21.540661 | orchestrator | 2025-08-29 15:23:20 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-08-29 15:23:21.540671 | orchestrator | 2025-08-29 15:23:20 | INFO  | Setting property image_build_date: 2024-09-26 2025-08-29 15:23:21.540682 | orchestrator | 2025-08-29 15:23:20 | INFO  | Checking status of 'Cirros 0.6.3' 2025-08-29 15:23:21.540692 | orchestrator | 2025-08-29 15:23:20 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-08-29 15:23:21.540710 | orchestrator | 2025-08-29 15:23:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-08-29 15:23:21.854215 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-08-29 15:23:23.910218 | orchestrator | 2025-08-29 15:23:23 | INFO  | date: 2025-08-29 2025-08-29 15:23:23.910319 | orchestrator | 2025-08-29 15:23:23 | INFO  | image: octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 15:23:23.910344 | orchestrator | 2025-08-29 15:23:23 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 15:23:23.910390 | orchestrator | 2025-08-29 15:23:23 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2.CHECKSUM 2025-08-29 15:23:23.931416 | orchestrator | 2025-08-29 15:23:23 | INFO  | checksum: 9bd11944634778935b43eb626302bc74d657e4c319fdb6fd625fdfeb36ffc69d 2025-08-29 15:23:24.009867 | orchestrator | 2025-08-29 15:23:24 | INFO  | It takes a moment until task e06c1f00-0969-4c47-8349-a336c21baff5 (image-manager) has been started and output is visible here. 2025-08-29 15:24:24.377035 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-29 15:24:24.377124 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-08-29 15:24:24.377132 | orchestrator | 2025-08-29 15:23:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 15:24:24.377141 | orchestrator | 2025-08-29 15:23:26 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2: 200 2025-08-29 15:24:24.377148 | orchestrator | 2025-08-29 15:23:26 | INFO  | Importing image OpenStack Octavia Amphora 2025-08-29 2025-08-29 15:24:24.377153 | orchestrator | 2025-08-29 15:23:26 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 15:24:24.377158 | orchestrator | 2025-08-29 15:23:27 | INFO  | Waiting for image to leave queued state... 2025-08-29 15:24:24.377177 | orchestrator | 2025-08-29 15:23:29 | INFO  | Waiting for import to complete... 2025-08-29 15:24:24.377182 | orchestrator | 2025-08-29 15:23:39 | INFO  | Waiting for import to complete... 2025-08-29 15:24:24.377186 | orchestrator | 2025-08-29 15:23:49 | INFO  | Waiting for import to complete... 2025-08-29 15:24:24.377190 | orchestrator | 2025-08-29 15:23:59 | INFO  | Waiting for import to complete... 2025-08-29 15:24:24.377194 | orchestrator | 2025-08-29 15:24:09 | INFO  | Waiting for import to complete... 
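The bootstrap script above resolves the amphora image URL, the companion checksum URL, and the expected SHA-256 before handing the import to the image-manager task. As a minimal sketch (not the script itself, and assuming the .CHECKSUM file uses the common "<sha256>  <filename>" layout), the same digest can be resolved and checked by hand:

    # Hypothetical manual verification; variable names are illustrative.
    IMAGE_URL=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
    CHECKSUM_URL=${IMAGE_URL}.CHECKSUM
    # The first column of the CHECKSUM file is assumed to be the sha256 digest.
    EXPECTED_SHA256=$(curl -fsSL "$CHECKSUM_URL" | awk '{print $1; exit}')
    # Optional: verify a locally downloaded copy against it.
    curl -fsSLo amphora.qcow2 "$IMAGE_URL"
    echo "$EXPECTED_SHA256  amphora.qcow2" | sha256sum -c -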
2025-08-29 15:24:24.377198 | orchestrator | 2025-08-29 15:24:20 | INFO  | Import of 'OpenStack Octavia Amphora 2025-08-29' successfully completed, reloading images 2025-08-29 15:24:24.377203 | orchestrator | 2025-08-29 15:24:20 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 15:24:24.377207 | orchestrator | 2025-08-29 15:24:20 | INFO  | Setting internal_version = 2025-08-29 2025-08-29 15:24:24.377211 | orchestrator | 2025-08-29 15:24:20 | INFO  | Setting image_original_user = ubuntu 2025-08-29 15:24:24.377215 | orchestrator | 2025-08-29 15:24:20 | INFO  | Adding tag amphora 2025-08-29 15:24:24.377219 | orchestrator | 2025-08-29 15:24:20 | INFO  | Adding tag os:ubuntu 2025-08-29 15:24:24.377223 | orchestrator | 2025-08-29 15:24:20 | INFO  | Setting property architecture: x86_64 2025-08-29 15:24:24.377226 | orchestrator | 2025-08-29 15:24:21 | INFO  | Setting property hw_disk_bus: scsi 2025-08-29 15:24:24.377230 | orchestrator | 2025-08-29 15:24:21 | INFO  | Setting property hw_rng_model: virtio 2025-08-29 15:24:24.377239 | orchestrator | 2025-08-29 15:24:21 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-08-29 15:24:24.377243 | orchestrator | 2025-08-29 15:24:21 | INFO  | Setting property hw_watchdog_action: reset 2025-08-29 15:24:24.377247 | orchestrator | 2025-08-29 15:24:21 | INFO  | Setting property hypervisor_type: qemu 2025-08-29 15:24:24.377251 | orchestrator | 2025-08-29 15:24:21 | INFO  | Setting property os_distro: ubuntu 2025-08-29 15:24:24.377255 | orchestrator | 2025-08-29 15:24:22 | INFO  | Setting property replace_frequency: quarterly 2025-08-29 15:24:24.377258 | orchestrator | 2025-08-29 15:24:22 | INFO  | Setting property uuid_validity: last-1 2025-08-29 15:24:24.377262 | orchestrator | 2025-08-29 15:24:22 | INFO  | Setting property provided_until: none 2025-08-29 15:24:24.377266 | orchestrator | 2025-08-29 15:24:22 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-08-29 15:24:24.377270 | orchestrator | 2025-08-29 15:24:22 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-08-29 15:24:24.377274 | orchestrator | 2025-08-29 15:24:23 | INFO  | Setting property internal_version: 2025-08-29 2025-08-29 15:24:24.377277 | orchestrator | 2025-08-29 15:24:23 | INFO  | Setting property image_original_user: ubuntu 2025-08-29 15:24:24.377281 | orchestrator | 2025-08-29 15:24:23 | INFO  | Setting property os_version: 2025-08-29 2025-08-29 15:24:24.377285 | orchestrator | 2025-08-29 15:24:23 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 15:24:24.377298 | orchestrator | 2025-08-29 15:24:23 | INFO  | Setting property image_build_date: 2025-08-29 2025-08-29 15:24:24.377302 | orchestrator | 2025-08-29 15:24:24 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 15:24:24.377306 | orchestrator | 2025-08-29 15:24:24 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 15:24:24.377313 | orchestrator | 2025-08-29 15:24:24 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-08-29 15:24:24.377317 | orchestrator | 2025-08-29 15:24:24 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-08-29 15:24:24.377325 | orchestrator | 2025-08-29 15:24:24 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-08-29 15:24:24.377330 | 
orchestrator | 2025-08-29 15:24:24 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-08-29 15:24:24.972401 | orchestrator | ok: Runtime: 0:03:12.141501 2025-08-29 15:24:24.997231 | 2025-08-29 15:24:24.997392 | TASK [Run checks] 2025-08-29 15:24:25.708289 | orchestrator | + set -e 2025-08-29 15:24:25.708491 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 15:24:25.708516 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 15:24:25.708536 | orchestrator | ++ INTERACTIVE=false 2025-08-29 15:24:25.708550 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 15:24:25.708562 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 15:24:25.708577 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 15:24:25.709074 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 15:24:25.714748 | orchestrator | 2025-08-29 15:24:25.714852 | orchestrator | # CHECK 2025-08-29 15:24:25.714869 | orchestrator | 2025-08-29 15:24:25.714882 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 15:24:25.714939 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 15:24:25.714952 | orchestrator | + echo 2025-08-29 15:24:25.714964 | orchestrator | + echo '# CHECK' 2025-08-29 15:24:25.714975 | orchestrator | + echo 2025-08-29 15:24:25.715002 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 15:24:25.715640 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:24:25.780364 | orchestrator | 2025-08-29 15:24:25.780471 | orchestrator | ## Containers @ testbed-manager 2025-08-29 15:24:25.780492 | orchestrator | 2025-08-29 15:24:25.780512 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:24:25.780529 | orchestrator | + echo 2025-08-29 15:24:25.780547 | orchestrator | + echo '## Containers @ testbed-manager' 2025-08-29 15:24:25.780564 | orchestrator | + echo 2025-08-29 15:24:25.780582 | orchestrator | + osism container testbed-manager ps 2025-08-29 15:24:28.043191 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 15:24:28.043300 | orchestrator | 7bac054a5005 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter 2025-08-29 15:24:28.043315 | orchestrator | 3e8296b5ed3a registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager 2025-08-29 15:24:28.043321 | orchestrator | f13bdb38e30f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-08-29 15:24:28.043333 | orchestrator | c8fa8817392e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-08-29 15:24:28.043339 | orchestrator | 537dc0f08dfa registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server 2025-08-29 15:24:28.043345 | orchestrator | b95fedba0561 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 20 minutes ago Up 20 minutes cephclient 2025-08-29 15:24:28.043355 | orchestrator | 4875b1539fbc registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2025-08-29 15:24:28.043361 | 
orchestrator | 16dd608a25e6 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2025-08-29 15:24:28.043385 | orchestrator | 0e18a83cff58 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-08-29 15:24:28.043392 | orchestrator | b9a10b967ca1 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 35 minutes ago Up 35 minutes (healthy) 80/tcp phpmyadmin 2025-08-29 15:24:28.043397 | orchestrator | db2722cd642a registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 36 minutes ago Up 36 minutes openstackclient 2025-08-29 15:24:28.043403 | orchestrator | a4238ad2332d registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 36 minutes ago Up 36 minutes (healthy) 8080/tcp homer 2025-08-29 15:24:28.043409 | orchestrator | 228b67ce50cb registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-08-29 15:24:28.043419 | orchestrator | c60fcdb41d29 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" About an hour ago Up 42 minutes (healthy) manager-inventory_reconciler-1 2025-08-29 15:24:28.043442 | orchestrator | ba201af5c29c registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) osism-ansible 2025-08-29 15:24:28.043448 | orchestrator | d0ec1de9bb4c registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) osism-kubernetes 2025-08-29 15:24:28.043454 | orchestrator | 8266a7442f24 registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) ceph-ansible 2025-08-29 15:24:28.043460 | orchestrator | d41c20abf581 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) kolla-ansible 2025-08-29 15:24:28.043466 | orchestrator | b52378f86511 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up 43 minutes (healthy) 8000/tcp manager-ara-server-1 2025-08-29 15:24:28.043472 | orchestrator | a6d13738cbb9 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 43 minutes (healthy) manager-beat-1 2025-08-29 15:24:28.043478 | orchestrator | 753f8da98192 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" About an hour ago Up 43 minutes (healthy) osismclient 2025-08-29 15:24:28.043484 | orchestrator | c5de7920e2c6 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 43 minutes (healthy) manager-listener-1 2025-08-29 15:24:28.043495 | orchestrator | f29f0eaded1f registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 43 minutes (healthy) manager-flower-1 2025-08-29 15:24:28.043501 | orchestrator | fc3650ccaa44 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 43 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-08-29 15:24:28.043507 | orchestrator | ec40a7c063af registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 43 minutes (healthy) 6379/tcp manager-redis-1 2025-08-29 15:24:28.043513 | orchestrator | 45b88fd7fee6 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" About an hour ago Up 43 minutes (healthy) 3306/tcp 
manager-mariadb-1 2025-08-29 15:24:28.043519 | orchestrator | 4530398ce567 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 43 minutes (healthy) manager-openstack-1 2025-08-29 15:24:28.043525 | orchestrator | e68dd300170d registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-08-29 15:24:28.401542 | orchestrator | 2025-08-29 15:24:28.401641 | orchestrator | ## Images @ testbed-manager 2025-08-29 15:24:28.401654 | orchestrator | 2025-08-29 15:24:28.401665 | orchestrator | + echo 2025-08-29 15:24:28.401676 | orchestrator | + echo '## Images @ testbed-manager' 2025-08-29 15:24:28.401687 | orchestrator | + echo 2025-08-29 15:24:28.401697 | orchestrator | + osism container testbed-manager images 2025-08-29 15:24:30.480723 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 15:24:30.480804 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e303c4555969 8 hours ago 237MB 2025-08-29 15:24:30.480822 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 3 weeks ago 11.5MB 2025-08-29 15:24:30.480828 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 6 weeks ago 571MB 2025-08-29 15:24:30.480833 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 15:24:30.480849 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 2025-08-29 15:24:30.480854 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 15:24:30.480860 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 6 weeks ago 891MB 2025-08-29 15:24:30.480864 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 6 weeks ago 360MB 2025-08-29 15:24:30.480869 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 15:24:30.480874 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 6 weeks ago 456MB 2025-08-29 15:24:30.480879 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 15:24:30.480883 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 6 weeks ago 575MB 2025-08-29 15:24:30.480920 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 6 weeks ago 535MB 2025-08-29 15:24:30.480930 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 6 weeks ago 308MB 2025-08-29 15:24:30.480938 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 6 weeks ago 1.21GB 2025-08-29 15:24:30.480946 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 7 weeks ago 310MB 2025-08-29 15:24:30.480954 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 7 weeks ago 41.4MB 2025-08-29 15:24:30.480962 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB 2025-08-29 15:24:30.480971 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 2 months ago 329MB 2025-08-29 15:24:30.480975 | 
orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 months ago 453MB 2025-08-29 15:24:30.480980 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB 2025-08-29 15:24:30.480984 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 11 months ago 300MB 2025-08-29 15:24:30.480989 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 14 months ago 146MB 2025-08-29 15:24:30.690459 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 15:24:30.690710 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:24:30.732598 | orchestrator | 2025-08-29 15:24:30.732683 | orchestrator | ## Containers @ testbed-node-0 2025-08-29 15:24:30.732697 | orchestrator | 2025-08-29 15:24:30.732709 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:24:30.732720 | orchestrator | + echo 2025-08-29 15:24:30.732731 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-08-29 15:24:30.732742 | orchestrator | + echo 2025-08-29 15:24:30.732752 | orchestrator | + osism container testbed-node-0 ps 2025-08-29 15:24:32.961166 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 15:24:32.961279 | orchestrator | d62c4c7a97c6 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-08-29 15:24:32.961296 | orchestrator | b5f6db2ca111 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-08-29 15:24:32.961308 | orchestrator | 26a8488fc2ba registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-08-29 15:24:32.961319 | orchestrator | dcacb006cedf registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-08-29 15:24:32.961330 | orchestrator | ee3663b9774d registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-08-29 15:24:32.961341 | orchestrator | 16adf515692a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-08-29 15:24:32.961352 | orchestrator | de0adbaf459f registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-08-29 15:24:32.961364 | orchestrator | 13293b836ce1 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-08-29 15:24:32.961396 | orchestrator | 41ae3f11d5bf registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-08-29 15:24:32.961408 | orchestrator | e1d173f91e84 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-08-29 15:24:32.961419 | orchestrator | 455a67918952 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-08-29 15:24:32.961429 | orchestrator | 2f4abd7c615f registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 
11 minutes (healthy) designate_worker 2025-08-29 15:24:32.961440 | orchestrator | 0edc7fd9df39 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2025-08-29 15:24:32.961451 | orchestrator | df3dfb7f7204 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-08-29 15:24:32.961462 | orchestrator | bc67acd442dd registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-08-29 15:24:32.961472 | orchestrator | 2c9474fa6fab registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-08-29 15:24:32.961483 | orchestrator | 0924bb60890f registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-08-29 15:24:32.961494 | orchestrator | 3254f758d2f0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-08-29 15:24:32.961504 | orchestrator | 7ce9e1a9fdca registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-08-29 15:24:32.961536 | orchestrator | 37300da8d508 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-08-29 15:24:32.961548 | orchestrator | c231913e49d1 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-08-29 15:24:32.961558 | orchestrator | 6f900aaebbd8 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api 2025-08-29 15:24:32.961569 | orchestrator | 5f9ac36d150b registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-08-29 15:24:32.961580 | orchestrator | 21954ed47479 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-08-29 15:24:32.961606 | orchestrator | 8d3923d962c2 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api 2025-08-29 15:24:32.961617 | orchestrator | 33ca32379fa8 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-08-29 15:24:32.961640 | orchestrator | fc93664dea8c registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-08-29 15:24:32.961651 | orchestrator | 386c042cfd23 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-08-29 15:24:32.961662 | orchestrator | 93ff6145cc18 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_scheduler 2025-08-29 15:24:32.961672 | 
orchestrator | 1dd71c9a1521 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-08-29 15:24:32.961688 | orchestrator | fc16b233d818 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_api 2025-08-29 15:24:32.961700 | orchestrator | b4388bb5d227 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-0 2025-08-29 15:24:32.961710 | orchestrator | 862f49ca78f2 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-08-29 15:24:32.961721 | orchestrator | c066af251d46 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2025-08-29 15:24:32.961732 | orchestrator | 1667f88d25c7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2025-08-29 15:24:32.961742 | orchestrator | bcab3e203ad4 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) horizon 2025-08-29 15:24:32.961753 | orchestrator | 2520f9ca6982 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-08-29 15:24:32.961768 | orchestrator | f1b268f3bb15 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch_dashboards 2025-08-29 15:24:32.961779 | orchestrator | 9cc6d17aa18d registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) opensearch 2025-08-29 15:24:32.961790 | orchestrator | c60e0cb0025c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-0 2025-08-29 15:24:32.961810 | orchestrator | cef39376bf18 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-08-29 15:24:32.961821 | orchestrator | c993969c8834 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2025-08-29 15:24:32.961832 | orchestrator | cc48631e493d registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) haproxy 2025-08-29 15:24:32.961842 | orchestrator | b48891ab9153 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2025-08-29 15:24:32.961859 | orchestrator | a8bda4de5deb registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2025-08-29 15:24:32.961870 | orchestrator | 280c4400faab registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes ovn_nb_db 2025-08-29 15:24:32.961881 | orchestrator | bdd79b4e6c8a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 31 minutes ago Up 31 minutes ceph-mon-testbed-node-0 2025-08-29 15:24:32.961892 | orchestrator | 48328a79d1be registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes 
ovn_controller 2025-08-29 15:24:32.961903 | orchestrator | f844574e62e8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) rabbitmq 2025-08-29 15:24:32.961938 | orchestrator | 7ecf23d243d3 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_vswitchd 2025-08-29 15:24:32.961949 | orchestrator | da75d8fa2952 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2025-08-29 15:24:32.961960 | orchestrator | b9f27fa4659e registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) redis_sentinel 2025-08-29 15:24:32.961971 | orchestrator | af01ea0f8c98 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) redis 2025-08-29 15:24:32.961982 | orchestrator | c1eddee9ac37 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) memcached 2025-08-29 15:24:32.961992 | orchestrator | d669943e035b registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2025-08-29 15:24:32.962003 | orchestrator | 18bb1d6fa535 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 35 minutes ago Up 35 minutes kolla_toolbox 2025-08-29 15:24:32.962062 | orchestrator | c210e7ccb997 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-08-29 15:24:33.281388 | orchestrator | 2025-08-29 15:24:33.281491 | orchestrator | ## Images @ testbed-node-0 2025-08-29 15:24:33.281506 | orchestrator | 2025-08-29 15:24:33.281516 | orchestrator | + echo 2025-08-29 15:24:33.281527 | orchestrator | + echo '## Images @ testbed-node-0' 2025-08-29 15:24:33.281538 | orchestrator | + echo 2025-08-29 15:24:33.281547 | orchestrator | + osism container testbed-node-0 images 2025-08-29 15:24:35.625107 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 15:24:35.625217 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 15:24:35.625231 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB 2025-08-29 15:24:35.625241 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB 2025-08-29 15:24:35.625251 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB 2025-08-29 15:24:35.625261 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB 2025-08-29 15:24:35.625302 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB 2025-08-29 15:24:35.625313 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB 2025-08-29 15:24:35.625323 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 2025-08-29 15:24:35.625333 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB 2025-08-29 15:24:35.625343 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 
f4164dfd1b02 6 weeks ago 1.01GB 2025-08-29 15:24:35.625369 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 15:24:35.625380 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB 2025-08-29 15:24:35.625389 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB 2025-08-29 15:24:35.625400 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB 2025-08-29 15:24:35.625411 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB 2025-08-29 15:24:35.625420 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 15:24:35.625430 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB 2025-08-29 15:24:35.625440 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 15:24:35.625450 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB 2025-08-29 15:24:35.625459 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB 2025-08-29 15:24:35.625469 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB 2025-08-29 15:24:35.625478 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB 2025-08-29 15:24:35.625488 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB 2025-08-29 15:24:35.625498 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB 2025-08-29 15:24:35.625507 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB 2025-08-29 15:24:35.625517 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB 2025-08-29 15:24:35.625527 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 6 weeks ago 1.04GB 2025-08-29 15:24:35.625536 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 6 weeks ago 1.04GB 2025-08-29 15:24:35.625546 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB 2025-08-29 15:24:35.625556 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB 2025-08-29 15:24:35.625566 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB 2025-08-29 15:24:35.625598 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB 2025-08-29 15:24:35.625608 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB 2025-08-29 15:24:35.625618 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB 2025-08-29 15:24:35.625628 | orchestrator | 
registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB 2025-08-29 15:24:35.625643 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB 2025-08-29 15:24:35.625655 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB 2025-08-29 15:24:35.625666 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB 2025-08-29 15:24:35.625678 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB 2025-08-29 15:24:35.625689 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB 2025-08-29 15:24:35.625701 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB 2025-08-29 15:24:35.625712 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB 2025-08-29 15:24:35.625723 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB 2025-08-29 15:24:35.625735 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB 2025-08-29 15:24:35.625746 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB 2025-08-29 15:24:35.625757 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB 2025-08-29 15:24:35.625769 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB 2025-08-29 15:24:35.625780 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB 2025-08-29 15:24:35.625790 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB 2025-08-29 15:24:35.625801 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB 2025-08-29 15:24:35.625812 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB 2025-08-29 15:24:35.625824 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB 2025-08-29 15:24:35.625835 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 6 weeks ago 1.11GB 2025-08-29 15:24:35.625846 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 6 weeks ago 1.11GB 2025-08-29 15:24:35.625857 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB 2025-08-29 15:24:35.625869 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB 2025-08-29 15:24:35.625880 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB 2025-08-29 15:24:35.625891 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB 2025-08-29 15:24:35.625909 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 6 weeks ago 1.04GB 2025-08-29 15:24:35.625938 | orchestrator | 
registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 6 weeks ago 1.04GB 2025-08-29 15:24:35.625949 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 6 weeks ago 1.04GB 2025-08-29 15:24:35.625961 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 6 weeks ago 1.04GB 2025-08-29 15:24:35.625972 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 15:24:36.001429 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 15:24:36.002351 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:24:36.075889 | orchestrator | 2025-08-29 15:24:36.076030 | orchestrator | ## Containers @ testbed-node-1 2025-08-29 15:24:36.076046 | orchestrator | 2025-08-29 15:24:36.076058 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:24:36.076069 | orchestrator | + echo 2025-08-29 15:24:36.076080 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-08-29 15:24:36.076093 | orchestrator | + echo 2025-08-29 15:24:36.076103 | orchestrator | + osism container testbed-node-1 ps 2025-08-29 15:24:38.426408 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 15:24:38.426513 | orchestrator | 365a6d7228bf registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-08-29 15:24:38.426535 | orchestrator | 90084803c01e registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-08-29 15:24:38.426549 | orchestrator | 5bee1d0dea72 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-08-29 15:24:38.426563 | orchestrator | 26b0feb6d606 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-08-29 15:24:38.426577 | orchestrator | eaaf541cf7a6 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-08-29 15:24:38.426638 | orchestrator | e53d66541332 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-08-29 15:24:38.426726 | orchestrator | 485be19d6cef registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-08-29 15:24:38.426749 | orchestrator | 22b09b430044 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-08-29 15:24:38.426764 | orchestrator | 7ecdf933979b registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-08-29 15:24:38.426780 | orchestrator | 0f10faec338f registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-08-29 15:24:38.426789 | orchestrator | a599dd006e1a registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-08-29 15:24:38.426821 | orchestrator | cfe475f74e06 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-08-29 15:24:38.426829 | orchestrator | b8a85b5f56f0 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2025-08-29 15:24:38.426837 | orchestrator | 4848e6afe3d1 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-08-29 15:24:38.426845 | orchestrator | 351d4c38fd01 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-08-29 15:24:38.426852 | orchestrator | c02ee35b1386 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-08-29 15:24:38.426860 | orchestrator | 2cad8b9856e4 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api 2025-08-29 15:24:38.426868 | orchestrator | b54b729da5fb registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-08-29 15:24:38.426894 | orchestrator | d2a428071901 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-08-29 15:24:38.426960 | orchestrator | a561f1f283ab registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-08-29 15:24:38.426972 | orchestrator | 05170fd1662b registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-08-29 15:24:38.426981 | orchestrator | 419a2747b8f7 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api 2025-08-29 15:24:38.426990 | orchestrator | bed962f17fb0 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-08-29 15:24:38.427003 | orchestrator | 9f0d898ce38a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-08-29 15:24:38.427170 | orchestrator | 777dfa986041 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api 2025-08-29 15:24:38.427188 | orchestrator | a99522d36603 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-08-29 15:24:38.427197 | orchestrator | db1890051eeb registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-08-29 15:24:38.427204 | orchestrator | be94b6109a00 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-08-29 15:24:38.427212 | orchestrator | 498ad7355146 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_scheduler 2025-08-29 15:24:38.427230 | orchestrator | c55a126cd396 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-08-29 15:24:38.427239 | orchestrator | 5cd5511e4c4a registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_api 2025-08-29 15:24:38.427246 | orchestrator | 67daeb3ab9f4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-1 2025-08-29 15:24:38.427254 | orchestrator | 975713475f5c registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-08-29 15:24:38.427262 | orchestrator | 637a139acb56 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2025-08-29 15:24:38.427270 | orchestrator | 259643846f20 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2025-08-29 15:24:38.427277 | orchestrator | 7e97ce43688b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2025-08-29 15:24:38.427285 | orchestrator | 51df0b36ba87 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2025-08-29 15:24:38.427293 | orchestrator | cb99d72fa1cb registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2025-08-29 15:24:38.427300 | orchestrator | 97547ed48c9a registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2025-08-29 15:24:38.427308 | orchestrator | f0e54539a280 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-1 2025-08-29 15:24:38.427316 | orchestrator | 1a4929ef2d8c registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-08-29 15:24:38.427331 | orchestrator | cc71fc0c62d4 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2025-08-29 15:24:38.427339 | orchestrator | b0dbfefe3dd8 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) haproxy 2025-08-29 15:24:38.427347 | orchestrator | 87fea954fdaf registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2025-08-29 15:24:38.427355 | orchestrator | 13f40644d590 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2025-08-29 15:24:38.427369 | orchestrator | 73ec606c23a5 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes ovn_nb_db 2025-08-29 15:24:38.427377 | orchestrator | 3f70a98a7663 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2025-08-29 15:24:38.427385 | orchestrator | ff941c87f5d5 
registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 31 minutes ago Up 31 minutes ceph-mon-testbed-node-1 2025-08-29 15:24:38.427398 | orchestrator | 5f20792eac0f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2025-08-29 15:24:38.427406 | orchestrator | 7d55d45ec0a8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_vswitchd 2025-08-29 15:24:38.427413 | orchestrator | ad3fca812b1d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2025-08-29 15:24:38.427421 | orchestrator | 56e10762b4ff registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) redis_sentinel 2025-08-29 15:24:38.427429 | orchestrator | 3605437fc53a registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) redis 2025-08-29 15:24:38.427437 | orchestrator | 557f9622e6d9 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) memcached 2025-08-29 15:24:38.427445 | orchestrator | 696a2bf511ef registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2025-08-29 15:24:38.427453 | orchestrator | 2f5ab9c00e09 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 34 minutes ago Up 34 minutes kolla_toolbox 2025-08-29 15:24:38.427460 | orchestrator | 60328e3f2d78 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-08-29 15:24:38.789633 | orchestrator | 2025-08-29 15:24:38.789751 | orchestrator | ## Images @ testbed-node-1 2025-08-29 15:24:38.789775 | orchestrator | 2025-08-29 15:24:38.789792 | orchestrator | + echo 2025-08-29 15:24:38.789810 | orchestrator | + echo '## Images @ testbed-node-1' 2025-08-29 15:24:38.789828 | orchestrator | + echo 2025-08-29 15:24:38.789845 | orchestrator | + osism container testbed-node-1 images 2025-08-29 15:24:41.193890 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 15:24:41.194053 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 15:24:41.194065 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB 2025-08-29 15:24:41.194072 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB 2025-08-29 15:24:41.194079 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB 2025-08-29 15:24:41.194085 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB 2025-08-29 15:24:41.194091 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB 2025-08-29 15:24:41.194098 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB 2025-08-29 15:24:41.194848 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB 2025-08-29 15:24:41.194863 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 
2025-08-29 15:24:41.194870 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB 2025-08-29 15:24:41.194896 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 15:24:41.194903 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB 2025-08-29 15:24:41.194909 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB 2025-08-29 15:24:41.194916 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB 2025-08-29 15:24:41.194953 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB 2025-08-29 15:24:41.194963 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 15:24:41.194973 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB 2025-08-29 15:24:41.194984 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 15:24:41.194993 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB 2025-08-29 15:24:41.195002 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB 2025-08-29 15:24:41.195013 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB 2025-08-29 15:24:41.195022 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB 2025-08-29 15:24:41.195028 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB 2025-08-29 15:24:41.195034 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB 2025-08-29 15:24:41.195044 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB 2025-08-29 15:24:41.195050 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB 2025-08-29 15:24:41.195056 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB 2025-08-29 15:24:41.195063 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB 2025-08-29 15:24:41.195069 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB 2025-08-29 15:24:41.195075 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB 2025-08-29 15:24:41.195081 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB 2025-08-29 15:24:41.195100 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB 2025-08-29 15:24:41.195107 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB 2025-08-29 15:24:41.195113 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB 
2025-08-29 15:24:41.195119 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB 2025-08-29 15:24:41.195125 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB 2025-08-29 15:24:41.195137 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB 2025-08-29 15:24:41.195143 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB 2025-08-29 15:24:41.195149 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB 2025-08-29 15:24:41.195155 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB 2025-08-29 15:24:41.195160 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB 2025-08-29 15:24:41.195166 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB 2025-08-29 15:24:41.195173 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB 2025-08-29 15:24:41.195180 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB 2025-08-29 15:24:41.195186 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB 2025-08-29 15:24:41.195192 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB 2025-08-29 15:24:41.195198 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB 2025-08-29 15:24:41.195204 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB 2025-08-29 15:24:41.195210 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB 2025-08-29 15:24:41.195216 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB 2025-08-29 15:24:41.195222 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB 2025-08-29 15:24:41.195228 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB 2025-08-29 15:24:41.195234 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB 2025-08-29 15:24:41.195240 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB 2025-08-29 15:24:41.195246 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 15:24:41.531461 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 15:24:41.532373 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:24:41.606491 | orchestrator | 2025-08-29 15:24:41.606581 | orchestrator | ## Containers @ testbed-node-2 2025-08-29 15:24:41.606595 | orchestrator | 2025-08-29 15:24:41.606607 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:24:41.606618 | orchestrator | + echo 2025-08-29 15:24:41.606630 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-08-29 15:24:41.606642 | orchestrator | + echo 
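Each node's check output follows the same traced pattern: the script loops over the four testbed hosts, compares MANAGER_VERSION (9.2.0) against 5.0.0 with semver, and prints the container and image listings through the osism CLI. A rough reconstruction inferred from the '+'-prefixed xtrace lines (the actual script under /opt/configuration may differ in details):

    # Sketch reconstructed from the trace; not the verbatim check script.
    MANAGER_VERSION=9.2.0   # read from environments/manager/configuration.yml earlier in the log
    for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
        if [[ "$(semver "$MANAGER_VERSION" 5.0.0)" -eq -1 ]]; then
            :   # older-than-5.0.0 branch; not exercised in this run
        fi
        echo; echo "## Containers @ ${node}"; echo
        osism container "$node" ps
        echo; echo "## Images @ ${node}"; echo
        osism container "$node" images
    done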
2025-08-29 15:24:41.606653 | orchestrator | + osism container testbed-node-2 ps 2025-08-29 15:24:44.013805 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 15:24:44.013922 | orchestrator | 96d2d358dd48 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-08-29 15:24:44.014011 | orchestrator | a115c1a44611 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-08-29 15:24:44.014082 | orchestrator | 624d097d33f2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-08-29 15:24:44.014088 | orchestrator | 21e91d80a792 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-08-29 15:24:44.014094 | orchestrator | ee716b2f6843 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-08-29 15:24:44.014100 | orchestrator | 8bd0676d427f registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-08-29 15:24:44.014106 | orchestrator | 142672bdc9ba registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-08-29 15:24:44.014112 | orchestrator | ce192fd29040 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-08-29 15:24:44.014116 | orchestrator | edc40bee17f9 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-08-29 15:24:44.014121 | orchestrator | 91aa5129af55 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-08-29 15:24:44.014126 | orchestrator | 5528d942ffe7 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-08-29 15:24:44.014131 | orchestrator | 0f3f46e4fb74 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-08-29 15:24:44.014136 | orchestrator | 1b2c23f91d19 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2025-08-29 15:24:44.014156 | orchestrator | bd9f82276087 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-08-29 15:24:44.014161 | orchestrator | e97cf164e576 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-08-29 15:24:44.014166 | orchestrator | 43491a94f3b5 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-08-29 15:24:44.014171 | orchestrator | 52563debcf83 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) 
designate_api 2025-08-29 15:24:44.014176 | orchestrator | 52023f8fcc27 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-08-29 15:24:44.014181 | orchestrator | 46f9c44c95bd registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-08-29 15:24:44.014200 | orchestrator | d87d7edc8aa7 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-08-29 15:24:44.014210 | orchestrator | 1f3f594f2108 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-08-29 15:24:44.014215 | orchestrator | 0e00764a0270 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api 2025-08-29 15:24:44.014219 | orchestrator | 072c24ebaa2c registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-08-29 15:24:44.014224 | orchestrator | 8b1f63401bfa registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-08-29 15:24:44.014231 | orchestrator | bf3ec1c13882 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api 2025-08-29 15:24:44.014235 | orchestrator | 2eca71b6ff8d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-08-29 15:24:44.014240 | orchestrator | 74d539e3d327 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-08-29 15:24:44.014245 | orchestrator | ba32e24aa714 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 17 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-08-29 15:24:44.014250 | orchestrator | 021607a9b466 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_scheduler 2025-08-29 15:24:44.014255 | orchestrator | d8ab212786be registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-08-29 15:24:44.014260 | orchestrator | 46d0f75f7905 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_api 2025-08-29 15:24:44.014265 | orchestrator | c34a1e1625aa registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-2 2025-08-29 15:24:44.014269 | orchestrator | 73038f072788 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone 2025-08-29 15:24:44.014280 | orchestrator | 2651c800ec6f registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2025-08-29 15:24:44.014287 | orchestrator | 4c727931866e 
registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2025-08-29 15:24:44.014295 | orchestrator | b725e462cf3f registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2025-08-29 15:24:44.014302 | orchestrator | cf2ddbb3ffcc registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2025-08-29 15:24:44.014311 | orchestrator | 9b53e89be5f2 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2025-08-29 15:24:44.014326 | orchestrator | 13cf4257c933 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2025-08-29 15:24:44.014333 | orchestrator | 9e91e8b71d38 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-2 2025-08-29 15:24:44.014350 | orchestrator | b9cb7e872081 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-08-29 15:24:44.014358 | orchestrator | ffcfce19c2d6 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) proxysql 2025-08-29 15:24:44.014365 | orchestrator | 3d072e496795 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) haproxy 2025-08-29 15:24:44.014373 | orchestrator | 37656425102c registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2025-08-29 15:24:44.014380 | orchestrator | cae9a864261a registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes ovn_sb_db 2025-08-29 15:24:44.014387 | orchestrator | 3076be79b0ed registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes ovn_nb_db 2025-08-29 15:24:44.014395 | orchestrator | a2290779e72a registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2025-08-29 15:24:44.014402 | orchestrator | b6b125f448d2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2025-08-29 15:24:44.014412 | orchestrator | 6d4c69d946df registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 31 minutes ago Up 31 minutes ceph-mon-testbed-node-2 2025-08-29 15:24:44.014420 | orchestrator | 2c9810319f85 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_vswitchd 2025-08-29 15:24:44.014428 | orchestrator | 43d70ca655c9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) openvswitch_db 2025-08-29 15:24:44.014435 | orchestrator | cdfeed54da0c registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) redis_sentinel 2025-08-29 15:24:44.014444 | orchestrator | 6114f18a3ece registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes 
(healthy) redis 2025-08-29 15:24:44.014507 | orchestrator | 0302219df9d2 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) memcached 2025-08-29 15:24:44.014521 | orchestrator | 5d6f213b799c registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2025-08-29 15:24:44.014530 | orchestrator | ca8a32587980 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 34 minutes ago Up 34 minutes kolla_toolbox 2025-08-29 15:24:44.014539 | orchestrator | d371ef95d434 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-08-29 15:24:44.400800 | orchestrator | 2025-08-29 15:24:44.400892 | orchestrator | ## Images @ testbed-node-2 2025-08-29 15:24:44.400906 | orchestrator | 2025-08-29 15:24:44.400918 | orchestrator | + echo 2025-08-29 15:24:44.400928 | orchestrator | + echo '## Images @ testbed-node-2' 2025-08-29 15:24:44.400988 | orchestrator | + echo 2025-08-29 15:24:44.400998 | orchestrator | + osism container testbed-node-2 images 2025-08-29 15:24:46.827547 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 15:24:46.827659 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 15:24:46.827675 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB 2025-08-29 15:24:46.827687 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB 2025-08-29 15:24:46.827698 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB 2025-08-29 15:24:46.827709 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB 2025-08-29 15:24:46.827720 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB 2025-08-29 15:24:46.827730 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB 2025-08-29 15:24:46.827741 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB 2025-08-29 15:24:46.827752 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 2025-08-29 15:24:46.827763 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB 2025-08-29 15:24:46.827773 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 15:24:46.827784 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB 2025-08-29 15:24:46.827795 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB 2025-08-29 15:24:46.827813 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB 2025-08-29 15:24:46.827834 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB 2025-08-29 15:24:46.827853 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 15:24:46.827873 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB 2025-08-29 15:24:46.827891 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 15:24:46.827910 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB 2025-08-29 15:24:46.828025 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB 2025-08-29 15:24:46.828051 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB 2025-08-29 15:24:46.828069 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB 2025-08-29 15:24:46.828087 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB 2025-08-29 15:24:46.828134 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB 2025-08-29 15:24:46.828155 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB 2025-08-29 15:24:46.828174 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB 2025-08-29 15:24:46.828192 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB 2025-08-29 15:24:46.828210 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB 2025-08-29 15:24:46.828221 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB 2025-08-29 15:24:46.828232 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB 2025-08-29 15:24:46.828242 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB 2025-08-29 15:24:46.828273 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB 2025-08-29 15:24:46.828285 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB 2025-08-29 15:24:46.828295 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB 2025-08-29 15:24:46.828306 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB 2025-08-29 15:24:46.828316 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB 2025-08-29 15:24:46.828327 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB 2025-08-29 15:24:46.828345 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB 2025-08-29 15:24:46.828356 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB 2025-08-29 15:24:46.828366 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB 2025-08-29 15:24:46.828376 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB 2025-08-29 15:24:46.828387 | 
orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB 2025-08-29 15:24:46.828397 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB 2025-08-29 15:24:46.828408 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB 2025-08-29 15:24:46.828418 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB 2025-08-29 15:24:46.828429 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB 2025-08-29 15:24:46.828440 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB 2025-08-29 15:24:46.828451 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB 2025-08-29 15:24:46.828462 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB 2025-08-29 15:24:46.828481 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB 2025-08-29 15:24:46.828492 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB 2025-08-29 15:24:46.828503 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB 2025-08-29 15:24:46.828513 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB 2025-08-29 15:24:46.828524 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB 2025-08-29 15:24:46.828534 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 15:24:47.151801 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-08-29 15:24:47.167699 | orchestrator | + set -e 2025-08-29 15:24:47.167787 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 15:24:47.171098 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 15:24:47.171155 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 15:24:47.171165 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 15:24:47.171171 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 15:24:47.171178 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 15:24:47.171186 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 15:24:47.171192 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 15:24:47.171198 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 15:24:47.171204 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 15:24:47.171209 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 15:24:47.171215 | orchestrator | ++ export ARA=false 2025-08-29 15:24:47.171221 | orchestrator | ++ ARA=false 2025-08-29 15:24:47.171226 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 15:24:47.171232 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 15:24:47.171540 | orchestrator | ++ export TEMPEST=false 2025-08-29 15:24:47.171558 | orchestrator | ++ TEMPEST=false 2025-08-29 15:24:47.171566 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 15:24:47.171572 | orchestrator | ++ IS_ZUUL=true 2025-08-29 15:24:47.171579 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 15:24:47.171589 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 15:24:47.171595 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 15:24:47.171601 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 15:24:47.171606 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 15:24:47.171611 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 15:24:47.171617 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 15:24:47.171622 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 15:24:47.171628 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 15:24:47.171633 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 15:24:47.172420 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 15:24:47.172480 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-08-29 15:24:47.183922 | orchestrator | + set -e 2025-08-29 15:24:47.184027 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 15:24:47.184037 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 15:24:47.184045 | orchestrator | ++ INTERACTIVE=false 2025-08-29 15:24:47.184051 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 15:24:47.184058 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 15:24:47.184065 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 15:24:47.185139 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 15:24:47.191623 | orchestrator | 2025-08-29 15:24:47.191675 | orchestrator | # Ceph status 2025-08-29 15:24:47.191683 | orchestrator | 2025-08-29 15:24:47.191688 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 15:24:47.191695 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 15:24:47.191701 | orchestrator | + echo 2025-08-29 15:24:47.191707 | orchestrator | + echo '# Ceph status' 2025-08-29 15:24:47.191713 | orchestrator | + echo 2025-08-29 15:24:47.191719 | orchestrator | + ceph -s 2025-08-29 15:24:47.876543 | orchestrator | cluster: 2025-08-29 15:24:47.876645 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-08-29 15:24:47.876660 | orchestrator | health: HEALTH_OK 2025-08-29 15:24:47.876672 | orchestrator | 2025-08-29 15:24:47.876684 | orchestrator | services: 2025-08-29 15:24:47.876722 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 31m) 2025-08-29 15:24:47.876748 | orchestrator | mgr: testbed-node-0(active, since 19m), standbys: testbed-node-2, testbed-node-1 2025-08-29 15:24:47.876760 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-08-29 15:24:47.876771 | orchestrator | osd: 6 osds: 6 up (since 27m), 6 in (since 28m) 2025-08-29 15:24:47.876782 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-08-29 15:24:47.876793 | orchestrator | 2025-08-29 15:24:47.876804 | orchestrator | data: 2025-08-29 15:24:47.876814 | orchestrator | volumes: 1/1 healthy 2025-08-29 15:24:47.876826 | orchestrator | pools: 14 pools, 401 pgs 2025-08-29 15:24:47.876836 | orchestrator | objects: 524 objects, 2.2 GiB 2025-08-29 15:24:47.876848 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-08-29 15:24:47.876858 | orchestrator | pgs: 401 active+clean 2025-08-29 15:24:47.876869 | orchestrator | 2025-08-29 15:24:47.926781 | orchestrator | 2025-08-29 15:24:47.926864 | orchestrator | # Ceph versions 2025-08-29 15:24:47.926876 | orchestrator | 2025-08-29 15:24:47.926888 | orchestrator | + echo 2025-08-29 15:24:47.926899 | orchestrator | + echo '# 
Ceph versions' 2025-08-29 15:24:47.926910 | orchestrator | + echo 2025-08-29 15:24:47.926921 | orchestrator | + ceph versions 2025-08-29 15:24:48.542653 | orchestrator | { 2025-08-29 15:24:48.542739 | orchestrator | "mon": { 2025-08-29 15:24:48.542750 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 15:24:48.542759 | orchestrator | }, 2025-08-29 15:24:48.542766 | orchestrator | "mgr": { 2025-08-29 15:24:48.542774 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 15:24:48.542781 | orchestrator | }, 2025-08-29 15:24:48.542788 | orchestrator | "osd": { 2025-08-29 15:24:48.542796 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-08-29 15:24:48.542803 | orchestrator | }, 2025-08-29 15:24:48.542810 | orchestrator | "mds": { 2025-08-29 15:24:48.542817 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 15:24:48.542825 | orchestrator | }, 2025-08-29 15:24:48.542832 | orchestrator | "rgw": { 2025-08-29 15:24:48.542839 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 15:24:48.542846 | orchestrator | }, 2025-08-29 15:24:48.542853 | orchestrator | "overall": { 2025-08-29 15:24:48.542860 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-08-29 15:24:48.542867 | orchestrator | } 2025-08-29 15:24:48.542875 | orchestrator | } 2025-08-29 15:24:48.593185 | orchestrator | 2025-08-29 15:24:48.593269 | orchestrator | # Ceph OSD tree 2025-08-29 15:24:48.593278 | orchestrator | 2025-08-29 15:24:48.593286 | orchestrator | + echo 2025-08-29 15:24:48.593294 | orchestrator | + echo '# Ceph OSD tree' 2025-08-29 15:24:48.593302 | orchestrator | + echo 2025-08-29 15:24:48.593310 | orchestrator | + ceph osd df tree 2025-08-29 15:24:49.128678 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-08-29 15:24:49.128795 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-08-29 15:24:49.128810 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-08-29 15:24:49.128821 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 74 MiB 19 GiB 5.30 0.90 190 up osd.0 2025-08-29 15:24:49.128832 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.53 1.10 202 up osd.4 2025-08-29 15:24:49.128843 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-08-29 15:24:49.128870 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 984 MiB 915 MiB 1 KiB 70 MiB 19 GiB 4.81 0.81 209 up osd.1 2025-08-29 15:24:49.128883 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.02 1.19 181 up osd.3 2025-08-29 15:24:49.128894 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-08-29 15:24:49.128927 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.69 1.13 191 up osd.2 2025-08-29 15:24:49.128939 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 74 MiB 19 GiB 5.15 0.87 197 up osd.5 2025-08-29 15:24:49.128996 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-08-29 15:24:49.129007 | 
orchestrator | MIN/MAX VAR: 0.81/1.19 STDDEV: 0.86 2025-08-29 15:24:49.181039 | orchestrator | 2025-08-29 15:24:49.181144 | orchestrator | # Ceph monitor status 2025-08-29 15:24:49.181164 | orchestrator | 2025-08-29 15:24:49.181180 | orchestrator | + echo 2025-08-29 15:24:49.181194 | orchestrator | + echo '# Ceph monitor status' 2025-08-29 15:24:49.181207 | orchestrator | + echo 2025-08-29 15:24:49.181215 | orchestrator | + ceph mon stat 2025-08-29 15:24:49.792493 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-08-29 15:24:49.853082 | orchestrator | 2025-08-29 15:24:49.853192 | orchestrator | # Ceph quorum status 2025-08-29 15:24:49.853216 | orchestrator | 2025-08-29 15:24:49.853351 | orchestrator | + echo 2025-08-29 15:24:49.853370 | orchestrator | + echo '# Ceph quorum status' 2025-08-29 15:24:49.853419 | orchestrator | + echo 2025-08-29 15:24:49.853435 | orchestrator | + ceph quorum_status 2025-08-29 15:24:49.853465 | orchestrator | + jq 2025-08-29 15:24:50.513258 | orchestrator | { 2025-08-29 15:24:50.513338 | orchestrator | "election_epoch": 8, 2025-08-29 15:24:50.513347 | orchestrator | "quorum": [ 2025-08-29 15:24:50.513353 | orchestrator | 0, 2025-08-29 15:24:50.513359 | orchestrator | 1, 2025-08-29 15:24:50.513364 | orchestrator | 2 2025-08-29 15:24:50.513370 | orchestrator | ], 2025-08-29 15:24:50.513376 | orchestrator | "quorum_names": [ 2025-08-29 15:24:50.513381 | orchestrator | "testbed-node-0", 2025-08-29 15:24:50.513387 | orchestrator | "testbed-node-1", 2025-08-29 15:24:50.513465 | orchestrator | "testbed-node-2" 2025-08-29 15:24:50.513559 | orchestrator | ], 2025-08-29 15:24:50.513568 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-08-29 15:24:50.513575 | orchestrator | "quorum_age": 1887, 2025-08-29 15:24:50.513580 | orchestrator | "features": { 2025-08-29 15:24:50.513586 | orchestrator | "quorum_con": "4540138322906710015", 2025-08-29 15:24:50.513592 | orchestrator | "quorum_mon": [ 2025-08-29 15:24:50.513597 | orchestrator | "kraken", 2025-08-29 15:24:50.513603 | orchestrator | "luminous", 2025-08-29 15:24:50.513608 | orchestrator | "mimic", 2025-08-29 15:24:50.513614 | orchestrator | "osdmap-prune", 2025-08-29 15:24:50.513619 | orchestrator | "nautilus", 2025-08-29 15:24:50.513625 | orchestrator | "octopus", 2025-08-29 15:24:50.513630 | orchestrator | "pacific", 2025-08-29 15:24:50.513636 | orchestrator | "elector-pinging", 2025-08-29 15:24:50.513641 | orchestrator | "quincy", 2025-08-29 15:24:50.513647 | orchestrator | "reef" 2025-08-29 15:24:50.513652 | orchestrator | ] 2025-08-29 15:24:50.513658 | orchestrator | }, 2025-08-29 15:24:50.513663 | orchestrator | "monmap": { 2025-08-29 15:24:50.513669 | orchestrator | "epoch": 1, 2025-08-29 15:24:50.513674 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-08-29 15:24:50.513681 | orchestrator | "modified": "2025-08-29T14:53:04.293842Z", 2025-08-29 15:24:50.513686 | orchestrator | "created": "2025-08-29T14:53:04.293842Z", 2025-08-29 15:24:50.513692 | orchestrator | "min_mon_release": 18, 2025-08-29 15:24:50.513697 | orchestrator | "min_mon_release_name": "reef", 2025-08-29 15:24:50.513702 | orchestrator | "election_strategy": 1, 2025-08-29 15:24:50.513708 | 
orchestrator | "disallowed_leaders: ": "", 2025-08-29 15:24:50.513713 | orchestrator | "stretch_mode": false, 2025-08-29 15:24:50.513718 | orchestrator | "tiebreaker_mon": "", 2025-08-29 15:24:50.513724 | orchestrator | "removed_ranks: ": "", 2025-08-29 15:24:50.513729 | orchestrator | "features": { 2025-08-29 15:24:50.513734 | orchestrator | "persistent": [ 2025-08-29 15:24:50.513739 | orchestrator | "kraken", 2025-08-29 15:24:50.513745 | orchestrator | "luminous", 2025-08-29 15:24:50.513750 | orchestrator | "mimic", 2025-08-29 15:24:50.513755 | orchestrator | "osdmap-prune", 2025-08-29 15:24:50.513761 | orchestrator | "nautilus", 2025-08-29 15:24:50.513766 | orchestrator | "octopus", 2025-08-29 15:24:50.513788 | orchestrator | "pacific", 2025-08-29 15:24:50.513794 | orchestrator | "elector-pinging", 2025-08-29 15:24:50.513813 | orchestrator | "quincy", 2025-08-29 15:24:50.513819 | orchestrator | "reef" 2025-08-29 15:24:50.513824 | orchestrator | ], 2025-08-29 15:24:50.513829 | orchestrator | "optional": [] 2025-08-29 15:24:50.513835 | orchestrator | }, 2025-08-29 15:24:50.513840 | orchestrator | "mons": [ 2025-08-29 15:24:50.513845 | orchestrator | { 2025-08-29 15:24:50.513851 | orchestrator | "rank": 0, 2025-08-29 15:24:50.513856 | orchestrator | "name": "testbed-node-0", 2025-08-29 15:24:50.513861 | orchestrator | "public_addrs": { 2025-08-29 15:24:50.513868 | orchestrator | "addrvec": [ 2025-08-29 15:24:50.513873 | orchestrator | { 2025-08-29 15:24:50.513879 | orchestrator | "type": "v2", 2025-08-29 15:24:50.513884 | orchestrator | "addr": "192.168.16.10:3300", 2025-08-29 15:24:50.513889 | orchestrator | "nonce": 0 2025-08-29 15:24:50.513894 | orchestrator | }, 2025-08-29 15:24:50.513900 | orchestrator | { 2025-08-29 15:24:50.513905 | orchestrator | "type": "v1", 2025-08-29 15:24:50.513911 | orchestrator | "addr": "192.168.16.10:6789", 2025-08-29 15:24:50.513916 | orchestrator | "nonce": 0 2025-08-29 15:24:50.513921 | orchestrator | } 2025-08-29 15:24:50.513926 | orchestrator | ] 2025-08-29 15:24:50.513932 | orchestrator | }, 2025-08-29 15:24:50.513937 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-08-29 15:24:50.513943 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-08-29 15:24:50.513963 | orchestrator | "priority": 0, 2025-08-29 15:24:50.513969 | orchestrator | "weight": 0, 2025-08-29 15:24:50.513974 | orchestrator | "crush_location": "{}" 2025-08-29 15:24:50.513979 | orchestrator | }, 2025-08-29 15:24:50.513985 | orchestrator | { 2025-08-29 15:24:50.513990 | orchestrator | "rank": 1, 2025-08-29 15:24:50.513995 | orchestrator | "name": "testbed-node-1", 2025-08-29 15:24:50.514000 | orchestrator | "public_addrs": { 2025-08-29 15:24:50.514006 | orchestrator | "addrvec": [ 2025-08-29 15:24:50.514078 | orchestrator | { 2025-08-29 15:24:50.514087 | orchestrator | "type": "v2", 2025-08-29 15:24:50.514093 | orchestrator | "addr": "192.168.16.11:3300", 2025-08-29 15:24:50.514098 | orchestrator | "nonce": 0 2025-08-29 15:24:50.514103 | orchestrator | }, 2025-08-29 15:24:50.514109 | orchestrator | { 2025-08-29 15:24:50.514114 | orchestrator | "type": "v1", 2025-08-29 15:24:50.514119 | orchestrator | "addr": "192.168.16.11:6789", 2025-08-29 15:24:50.514125 | orchestrator | "nonce": 0 2025-08-29 15:24:50.514130 | orchestrator | } 2025-08-29 15:24:50.514135 | orchestrator | ] 2025-08-29 15:24:50.514141 | orchestrator | }, 2025-08-29 15:24:50.514146 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-08-29 15:24:50.514151 | orchestrator | "public_addr": 
"192.168.16.11:6789/0", 2025-08-29 15:24:50.514157 | orchestrator | "priority": 0, 2025-08-29 15:24:50.514162 | orchestrator | "weight": 0, 2025-08-29 15:24:50.514167 | orchestrator | "crush_location": "{}" 2025-08-29 15:24:50.514172 | orchestrator | }, 2025-08-29 15:24:50.514178 | orchestrator | { 2025-08-29 15:24:50.514183 | orchestrator | "rank": 2, 2025-08-29 15:24:50.514188 | orchestrator | "name": "testbed-node-2", 2025-08-29 15:24:50.514194 | orchestrator | "public_addrs": { 2025-08-29 15:24:50.514200 | orchestrator | "addrvec": [ 2025-08-29 15:24:50.514206 | orchestrator | { 2025-08-29 15:24:50.514213 | orchestrator | "type": "v2", 2025-08-29 15:24:50.514219 | orchestrator | "addr": "192.168.16.12:3300", 2025-08-29 15:24:50.514225 | orchestrator | "nonce": 0 2025-08-29 15:24:50.514231 | orchestrator | }, 2025-08-29 15:24:50.514237 | orchestrator | { 2025-08-29 15:24:50.514243 | orchestrator | "type": "v1", 2025-08-29 15:24:50.514249 | orchestrator | "addr": "192.168.16.12:6789", 2025-08-29 15:24:50.514255 | orchestrator | "nonce": 0 2025-08-29 15:24:50.514261 | orchestrator | } 2025-08-29 15:24:50.514267 | orchestrator | ] 2025-08-29 15:24:50.514273 | orchestrator | }, 2025-08-29 15:24:50.514280 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-08-29 15:24:50.514286 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-08-29 15:24:50.514292 | orchestrator | "priority": 0, 2025-08-29 15:24:50.514298 | orchestrator | "weight": 0, 2025-08-29 15:24:50.514304 | orchestrator | "crush_location": "{}" 2025-08-29 15:24:50.514311 | orchestrator | } 2025-08-29 15:24:50.514317 | orchestrator | ] 2025-08-29 15:24:50.514323 | orchestrator | } 2025-08-29 15:24:50.514329 | orchestrator | } 2025-08-29 15:24:50.514345 | orchestrator | 2025-08-29 15:24:50.514351 | orchestrator | + echo 2025-08-29 15:24:50.514363 | orchestrator | # Ceph free space status 2025-08-29 15:24:50.514369 | orchestrator | 2025-08-29 15:24:50.514375 | orchestrator | + echo '# Ceph free space status' 2025-08-29 15:24:50.514382 | orchestrator | + echo 2025-08-29 15:24:50.514388 | orchestrator | + ceph df 2025-08-29 15:24:51.174908 | orchestrator | --- RAW STORAGE --- 2025-08-29 15:24:51.175035 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-08-29 15:24:51.175063 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 15:24:51.175074 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 15:24:51.175084 | orchestrator | 2025-08-29 15:24:51.175094 | orchestrator | --- POOLS --- 2025-08-29 15:24:51.175104 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-08-29 15:24:51.175116 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-08-29 15:24:51.175125 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:24:51.175136 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-08-29 15:24:51.175146 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:24:51.175155 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:24:51.175165 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-08-29 15:24:51.175174 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-08-29 15:24:51.175184 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:24:51.175193 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-08-29 15:24:51.175203 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:24:51.175212 | orchestrator | volumes 11 
32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:24:51.175222 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2025-08-29 15:24:51.175231 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:24:51.175241 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:24:51.227492 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:24:51.285718 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:24:51.285871 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-08-29 15:24:51.285897 | orchestrator | + osism apply facts 2025-08-29 15:24:53.273514 | orchestrator | 2025-08-29 15:24:53 | INFO  | Task 20676b31-c72b-47e8-94a9-317adc66b2bd (facts) was prepared for execution. 2025-08-29 15:24:53.273621 | orchestrator | 2025-08-29 15:24:53 | INFO  | It takes a moment until task 20676b31-c72b-47e8-94a9-317adc66b2bd (facts) has been started and output is visible here. 2025-08-29 15:25:06.704072 | orchestrator | 2025-08-29 15:25:06.704202 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 15:25:06.704225 | orchestrator | 2025-08-29 15:25:06.704235 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 15:25:06.704244 | orchestrator | Friday 29 August 2025 15:24:57 +0000 (0:00:00.282) 0:00:00.282 ********* 2025-08-29 15:25:06.704252 | orchestrator | ok: [testbed-manager] 2025-08-29 15:25:06.704262 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:06.704271 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:06.704281 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:06.704291 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:06.704301 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:06.704314 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:06.704331 | orchestrator | 2025-08-29 15:25:06.704349 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 15:25:06.704365 | orchestrator | Friday 29 August 2025 15:24:59 +0000 (0:00:01.545) 0:00:01.827 ********* 2025-08-29 15:25:06.704382 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:25:06.704397 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:06.704411 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:25:06.704457 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:25:06.704475 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:06.704489 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:06.704505 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:06.704520 | orchestrator | 2025-08-29 15:25:06.704536 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 15:25:06.704553 | orchestrator | 2025-08-29 15:25:06.704569 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 15:25:06.704586 | orchestrator | Friday 29 August 2025 15:25:00 +0000 (0:00:01.437) 0:00:03.265 ********* 2025-08-29 15:25:06.704603 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:06.704618 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:06.704635 | orchestrator | ok: [testbed-manager] 2025-08-29 15:25:06.704651 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:06.704668 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:06.704685 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:06.704702 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:06.704720 | orchestrator | 2025-08-29 
15:25:06.704734 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 15:25:06.704746 | orchestrator | 2025-08-29 15:25:06.704757 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 15:25:06.704769 | orchestrator | Friday 29 August 2025 15:25:05 +0000 (0:00:05.079) 0:00:08.345 ********* 2025-08-29 15:25:06.704781 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:25:06.704792 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:06.704803 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:25:06.704814 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:25:06.704825 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:06.704836 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:06.704848 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:06.704859 | orchestrator | 2025-08-29 15:25:06.704871 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:25:06.704882 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704896 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704907 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704918 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704930 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704942 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704951 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:06.704961 | orchestrator | 2025-08-29 15:25:06.704970 | orchestrator | 2025-08-29 15:25:06.705042 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:25:06.705072 | orchestrator | Friday 29 August 2025 15:25:06 +0000 (0:00:00.580) 0:00:08.925 ********* 2025-08-29 15:25:06.705082 | orchestrator | =============================================================================== 2025-08-29 15:25:06.705092 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.08s 2025-08-29 15:25:06.705102 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.55s 2025-08-29 15:25:06.705111 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.44s 2025-08-29 15:25:06.705121 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-08-29 15:25:07.066875 | orchestrator | + osism validate ceph-mons 2025-08-29 15:25:39.748741 | orchestrator | 2025-08-29 15:25:39.748850 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-08-29 15:25:39.748868 | orchestrator | 2025-08-29 15:25:39.748881 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 15:25:39.748893 | orchestrator | Friday 29 August 2025 15:25:23 +0000 (0:00:00.456) 0:00:00.456 ********* 2025-08-29 15:25:39.748904 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-08-29 15:25:39.748915 | orchestrator | 2025-08-29 15:25:39.748926 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 15:25:39.748937 | orchestrator | Friday 29 August 2025 15:25:24 +0000 (0:00:00.741) 0:00:01.198 ********* 2025-08-29 15:25:39.748947 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:39.748958 | orchestrator | 2025-08-29 15:25:39.748969 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 15:25:39.748980 | orchestrator | Friday 29 August 2025 15:25:25 +0000 (0:00:00.984) 0:00:02.183 ********* 2025-08-29 15:25:39.748991 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749002 | orchestrator | 2025-08-29 15:25:39.749013 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-08-29 15:25:39.749080 | orchestrator | Friday 29 August 2025 15:25:25 +0000 (0:00:00.295) 0:00:02.478 ********* 2025-08-29 15:25:39.749093 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749104 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:39.749115 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:39.749126 | orchestrator | 2025-08-29 15:25:39.749137 | orchestrator | TASK [Get container info] ****************************************************** 2025-08-29 15:25:39.749148 | orchestrator | Friday 29 August 2025 15:25:25 +0000 (0:00:00.301) 0:00:02.779 ********* 2025-08-29 15:25:39.749159 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:39.749169 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749180 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:39.749191 | orchestrator | 2025-08-29 15:25:39.749202 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-08-29 15:25:39.749213 | orchestrator | Friday 29 August 2025 15:25:26 +0000 (0:00:01.059) 0:00:03.839 ********* 2025-08-29 15:25:39.749223 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.749234 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:25:39.749245 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:25:39.749258 | orchestrator | 2025-08-29 15:25:39.749270 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-08-29 15:25:39.749283 | orchestrator | Friday 29 August 2025 15:25:27 +0000 (0:00:00.314) 0:00:04.153 ********* 2025-08-29 15:25:39.749295 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749307 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:39.749320 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:39.749332 | orchestrator | 2025-08-29 15:25:39.749345 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:25:39.749356 | orchestrator | Friday 29 August 2025 15:25:27 +0000 (0:00:00.541) 0:00:04.695 ********* 2025-08-29 15:25:39.749369 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749381 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:39.749393 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:39.749405 | orchestrator | 2025-08-29 15:25:39.749418 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-08-29 15:25:39.749430 | orchestrator | Friday 29 August 2025 15:25:28 +0000 (0:00:00.310) 0:00:05.006 ********* 2025-08-29 15:25:39.749442 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 15:25:39.749455 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:25:39.749467 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:25:39.749479 | orchestrator | 2025-08-29 15:25:39.749492 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-08-29 15:25:39.749504 | orchestrator | Friday 29 August 2025 15:25:28 +0000 (0:00:00.308) 0:00:05.315 ********* 2025-08-29 15:25:39.749536 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749549 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:25:39.749561 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:25:39.749575 | orchestrator | 2025-08-29 15:25:39.749587 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:25:39.749599 | orchestrator | Friday 29 August 2025 15:25:28 +0000 (0:00:00.315) 0:00:05.630 ********* 2025-08-29 15:25:39.749612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.749625 | orchestrator | 2025-08-29 15:25:39.749637 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:25:39.749648 | orchestrator | Friday 29 August 2025 15:25:29 +0000 (0:00:00.741) 0:00:06.371 ********* 2025-08-29 15:25:39.749658 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.749669 | orchestrator | 2025-08-29 15:25:39.749680 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:25:39.749690 | orchestrator | Friday 29 August 2025 15:25:29 +0000 (0:00:00.298) 0:00:06.670 ********* 2025-08-29 15:25:39.749701 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.749712 | orchestrator | 2025-08-29 15:25:39.749722 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:39.749733 | orchestrator | Friday 29 August 2025 15:25:29 +0000 (0:00:00.278) 0:00:06.949 ********* 2025-08-29 15:25:39.749743 | orchestrator | 2025-08-29 15:25:39.749754 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:39.749765 | orchestrator | Friday 29 August 2025 15:25:30 +0000 (0:00:00.071) 0:00:07.021 ********* 2025-08-29 15:25:39.749775 | orchestrator | 2025-08-29 15:25:39.749786 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:39.749796 | orchestrator | Friday 29 August 2025 15:25:30 +0000 (0:00:00.071) 0:00:07.092 ********* 2025-08-29 15:25:39.749807 | orchestrator | 2025-08-29 15:25:39.749818 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:25:39.749828 | orchestrator | Friday 29 August 2025 15:25:30 +0000 (0:00:00.096) 0:00:07.189 ********* 2025-08-29 15:25:39.749839 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.749849 | orchestrator | 2025-08-29 15:25:39.749860 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-08-29 15:25:39.749871 | orchestrator | Friday 29 August 2025 15:25:30 +0000 (0:00:00.260) 0:00:07.449 ********* 2025-08-29 15:25:39.749881 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.749892 | orchestrator | 2025-08-29 15:25:39.749920 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-08-29 15:25:39.749931 | orchestrator | Friday 29 August 2025 15:25:30 +0000 (0:00:00.281) 0:00:07.731 ********* 
2025-08-29 15:25:39.749942 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.749952 | orchestrator | 2025-08-29 15:25:39.749963 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-08-29 15:25:39.749974 | orchestrator | Friday 29 August 2025 15:25:30 +0000 (0:00:00.133) 0:00:07.864 ********* 2025-08-29 15:25:39.749984 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:25:39.749995 | orchestrator | 2025-08-29 15:25:39.750006 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-08-29 15:25:39.750135 | orchestrator | Friday 29 August 2025 15:25:32 +0000 (0:00:01.570) 0:00:09.435 ********* 2025-08-29 15:25:39.750153 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750164 | orchestrator | 2025-08-29 15:25:39.750175 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-08-29 15:25:39.750185 | orchestrator | Friday 29 August 2025 15:25:32 +0000 (0:00:00.337) 0:00:09.772 ********* 2025-08-29 15:25:39.750196 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.750207 | orchestrator | 2025-08-29 15:25:39.750218 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-08-29 15:25:39.750236 | orchestrator | Friday 29 August 2025 15:25:33 +0000 (0:00:00.375) 0:00:10.148 ********* 2025-08-29 15:25:39.750258 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750269 | orchestrator | 2025-08-29 15:25:39.750280 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-08-29 15:25:39.750291 | orchestrator | Friday 29 August 2025 15:25:33 +0000 (0:00:00.340) 0:00:10.489 ********* 2025-08-29 15:25:39.750301 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750312 | orchestrator | 2025-08-29 15:25:39.750323 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-08-29 15:25:39.750334 | orchestrator | Friday 29 August 2025 15:25:33 +0000 (0:00:00.329) 0:00:10.818 ********* 2025-08-29 15:25:39.750345 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.750355 | orchestrator | 2025-08-29 15:25:39.750366 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-08-29 15:25:39.750377 | orchestrator | Friday 29 August 2025 15:25:33 +0000 (0:00:00.127) 0:00:10.945 ********* 2025-08-29 15:25:39.750387 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750398 | orchestrator | 2025-08-29 15:25:39.750409 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-08-29 15:25:39.750420 | orchestrator | Friday 29 August 2025 15:25:34 +0000 (0:00:00.128) 0:00:11.074 ********* 2025-08-29 15:25:39.750430 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750441 | orchestrator | 2025-08-29 15:25:39.750452 | orchestrator | TASK [Gather status data] ****************************************************** 2025-08-29 15:25:39.750463 | orchestrator | Friday 29 August 2025 15:25:34 +0000 (0:00:00.150) 0:00:11.225 ********* 2025-08-29 15:25:39.750474 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:25:39.750484 | orchestrator | 2025-08-29 15:25:39.750495 | orchestrator | TASK [Set health test data] **************************************************** 2025-08-29 15:25:39.750506 | orchestrator | Friday 29 August 2025 15:25:35 +0000 (0:00:01.226) 0:00:12.451 ********* 2025-08-29 
15:25:39.750517 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750528 | orchestrator | 2025-08-29 15:25:39.750538 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-08-29 15:25:39.750549 | orchestrator | Friday 29 August 2025 15:25:35 +0000 (0:00:00.322) 0:00:12.774 ********* 2025-08-29 15:25:39.750560 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.750571 | orchestrator | 2025-08-29 15:25:39.750582 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-08-29 15:25:39.750593 | orchestrator | Friday 29 August 2025 15:25:35 +0000 (0:00:00.140) 0:00:12.915 ********* 2025-08-29 15:25:39.750604 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:25:39.750615 | orchestrator | 2025-08-29 15:25:39.750625 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-08-29 15:25:39.750636 | orchestrator | Friday 29 August 2025 15:25:36 +0000 (0:00:00.151) 0:00:13.066 ********* 2025-08-29 15:25:39.750647 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.750658 | orchestrator | 2025-08-29 15:25:39.750669 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-08-29 15:25:39.750680 | orchestrator | Friday 29 August 2025 15:25:36 +0000 (0:00:00.151) 0:00:13.217 ********* 2025-08-29 15:25:39.750691 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.750702 | orchestrator | 2025-08-29 15:25:39.750712 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 15:25:39.750723 | orchestrator | Friday 29 August 2025 15:25:36 +0000 (0:00:00.403) 0:00:13.621 ********* 2025-08-29 15:25:39.750734 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:39.750745 | orchestrator | 2025-08-29 15:25:39.750756 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 15:25:39.750767 | orchestrator | Friday 29 August 2025 15:25:36 +0000 (0:00:00.265) 0:00:13.886 ********* 2025-08-29 15:25:39.750777 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:25:39.750788 | orchestrator | 2025-08-29 15:25:39.750799 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:25:39.750810 | orchestrator | Friday 29 August 2025 15:25:37 +0000 (0:00:00.279) 0:00:14.166 ********* 2025-08-29 15:25:39.750827 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:39.750838 | orchestrator | 2025-08-29 15:25:39.750849 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:25:39.750859 | orchestrator | Friday 29 August 2025 15:25:38 +0000 (0:00:01.726) 0:00:15.893 ********* 2025-08-29 15:25:39.750870 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:39.750881 | orchestrator | 2025-08-29 15:25:39.750891 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:25:39.750902 | orchestrator | Friday 29 August 2025 15:25:39 +0000 (0:00:00.312) 0:00:16.205 ********* 2025-08-29 15:25:39.750913 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:39.750924 | orchestrator | 2025-08-29 15:25:39.750942 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 
15:25:42.293953 | orchestrator | Friday 29 August 2025 15:25:39 +0000 (0:00:00.279) 0:00:16.485 ********* 2025-08-29 15:25:42.294150 | orchestrator | 2025-08-29 15:25:42.294183 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:42.294197 | orchestrator | Friday 29 August 2025 15:25:39 +0000 (0:00:00.077) 0:00:16.563 ********* 2025-08-29 15:25:42.294208 | orchestrator | 2025-08-29 15:25:42.294220 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:42.294231 | orchestrator | Friday 29 August 2025 15:25:39 +0000 (0:00:00.079) 0:00:16.642 ********* 2025-08-29 15:25:42.294245 | orchestrator | 2025-08-29 15:25:42.294256 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-08-29 15:25:42.294267 | orchestrator | Friday 29 August 2025 15:25:39 +0000 (0:00:00.077) 0:00:16.720 ********* 2025-08-29 15:25:42.294278 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:42.294288 | orchestrator | 2025-08-29 15:25:42.294299 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:25:42.294310 | orchestrator | Friday 29 August 2025 15:25:41 +0000 (0:00:01.609) 0:00:18.330 ********* 2025-08-29 15:25:42.294320 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-08-29 15:25:42.294331 | orchestrator |  "msg": [ 2025-08-29 15:25:42.294343 | orchestrator |  "Validator run completed.", 2025-08-29 15:25:42.294354 | orchestrator |  "You can find the report file here:", 2025-08-29 15:25:42.294365 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-08-29T15:25:24+00:00-report.json", 2025-08-29 15:25:42.294376 | orchestrator |  "on the following host:", 2025-08-29 15:25:42.294387 | orchestrator |  "testbed-manager" 2025-08-29 15:25:42.294398 | orchestrator |  ] 2025-08-29 15:25:42.294408 | orchestrator | } 2025-08-29 15:25:42.294419 | orchestrator | 2025-08-29 15:25:42.294430 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:25:42.294442 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 15:25:42.294475 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:42.294488 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:25:42.294498 | orchestrator | 2025-08-29 15:25:42.294511 | orchestrator | 2025-08-29 15:25:42.294523 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:25:42.294535 | orchestrator | Friday 29 August 2025 15:25:41 +0000 (0:00:00.613) 0:00:18.943 ********* 2025-08-29 15:25:42.294547 | orchestrator | =============================================================================== 2025-08-29 15:25:42.294573 | orchestrator | Aggregate test results step one ----------------------------------------- 1.73s 2025-08-29 15:25:42.294585 | orchestrator | Write report file ------------------------------------------------------- 1.61s 2025-08-29 15:25:42.294620 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.57s 2025-08-29 15:25:42.294633 | orchestrator | Gather status data ------------------------------------------------------ 1.23s 2025-08-29 
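The ceph-mons validator above reports its result as a JSON file under /opt/reports/validator/ on testbed-manager. As a minimal sketch of how such a report could be inspected (assuming shell access to testbed-manager and a python3 interpreter there; the report schema itself is not shown in this log):

    # on testbed-manager: locate the newest ceph-mons validator report and pretty-print it
    report="$(ls -t /opt/reports/validator/ceph-mons-validator-*-report.json | head -n 1)"
    python3 -m json.tool "$report"

The same pattern would apply to the ceph-mgrs and ceph-osds reports produced further below.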
15:25:42.294645 | orchestrator | Get container info ------------------------------------------------------ 1.06s 2025-08-29 15:25:42.294657 | orchestrator | Create report output directory ------------------------------------------ 0.98s 2025-08-29 15:25:42.294667 | orchestrator | Get timestamp for report file ------------------------------------------- 0.74s 2025-08-29 15:25:42.294678 | orchestrator | Aggregate test results step one ----------------------------------------- 0.74s 2025-08-29 15:25:42.294689 | orchestrator | Print report file information ------------------------------------------- 0.61s 2025-08-29 15:25:42.294699 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2025-08-29 15:25:42.294710 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.40s 2025-08-29 15:25:42.294721 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.38s 2025-08-29 15:25:42.294731 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s 2025-08-29 15:25:42.294742 | orchestrator | Set quorum test data ---------------------------------------------------- 0.34s 2025-08-29 15:25:42.294753 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s 2025-08-29 15:25:42.294763 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2025-08-29 15:25:42.294774 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.32s 2025-08-29 15:25:42.294785 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-08-29 15:25:42.294795 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2025-08-29 15:25:42.294806 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-08-29 15:25:42.637583 | orchestrator | + osism validate ceph-mgrs 2025-08-29 15:26:14.864840 | orchestrator | 2025-08-29 15:26:14.864944 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-08-29 15:26:14.864963 | orchestrator | 2025-08-29 15:26:14.864978 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 15:26:14.864994 | orchestrator | Friday 29 August 2025 15:25:59 +0000 (0:00:00.453) 0:00:00.453 ********* 2025-08-29 15:26:14.865010 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.865025 | orchestrator | 2025-08-29 15:26:14.865040 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 15:26:14.865055 | orchestrator | Friday 29 August 2025 15:25:59 +0000 (0:00:00.693) 0:00:01.147 ********* 2025-08-29 15:26:14.865070 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.865085 | orchestrator | 2025-08-29 15:26:14.865100 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 15:26:14.865137 | orchestrator | Friday 29 August 2025 15:26:00 +0000 (0:00:00.916) 0:00:02.063 ********* 2025-08-29 15:26:14.865150 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.865166 | orchestrator | 2025-08-29 15:26:14.865180 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-08-29 15:26:14.865194 | orchestrator | Friday 29 
August 2025 15:26:01 +0000 (0:00:00.262) 0:00:02.326 ********* 2025-08-29 15:26:14.865207 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.865221 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:26:14.865234 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:26:14.865247 | orchestrator | 2025-08-29 15:26:14.865261 | orchestrator | TASK [Get container info] ****************************************************** 2025-08-29 15:26:14.865275 | orchestrator | Friday 29 August 2025 15:26:01 +0000 (0:00:00.322) 0:00:02.648 ********* 2025-08-29 15:26:14.865288 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.865302 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:26:14.865315 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:26:14.865329 | orchestrator | 2025-08-29 15:26:14.865360 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-08-29 15:26:14.865396 | orchestrator | Friday 29 August 2025 15:26:02 +0000 (0:00:00.996) 0:00:03.645 ********* 2025-08-29 15:26:14.865410 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.865423 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:26:14.865436 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:26:14.865450 | orchestrator | 2025-08-29 15:26:14.865463 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-08-29 15:26:14.865477 | orchestrator | Friday 29 August 2025 15:26:02 +0000 (0:00:00.290) 0:00:03.935 ********* 2025-08-29 15:26:14.865490 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.865504 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:26:14.865518 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:26:14.865532 | orchestrator | 2025-08-29 15:26:14.865545 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:26:14.865559 | orchestrator | Friday 29 August 2025 15:26:03 +0000 (0:00:00.555) 0:00:04.491 ********* 2025-08-29 15:26:14.865572 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.865586 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:26:14.865599 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:26:14.865612 | orchestrator | 2025-08-29 15:26:14.865626 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-08-29 15:26:14.865639 | orchestrator | Friday 29 August 2025 15:26:03 +0000 (0:00:00.316) 0:00:04.807 ********* 2025-08-29 15:26:14.865652 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.865666 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:26:14.865679 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:26:14.865692 | orchestrator | 2025-08-29 15:26:14.865703 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-08-29 15:26:14.865715 | orchestrator | Friday 29 August 2025 15:26:03 +0000 (0:00:00.343) 0:00:05.151 ********* 2025-08-29 15:26:14.865726 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.865737 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:26:14.865748 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:26:14.865760 | orchestrator | 2025-08-29 15:26:14.865771 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:26:14.865782 | orchestrator | Friday 29 August 2025 15:26:04 +0000 (0:00:00.332) 0:00:05.483 ********* 2025-08-29 15:26:14.865793 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
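The container tests in this play reduce to asking the container runtime on each node whether a ceph-mgr container exists and is in the running state. A hedged manual equivalent, assuming Docker is the runtime on the testbed nodes (the kolla and ceph-daemon container names elsewhere in this log suggest as much) and that you are logged in on a control node such as testbed-node-0:

    # list any ceph-mgr container together with its state; an empty result would fail the
    # existence test, a state other than "running" would fail the running test
    docker ps --all --filter name=ceph-mgr --format '{{.Names}} {{.State}} ({{.Status}})'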
15:26:14.865803 | orchestrator | 2025-08-29 15:26:14.865814 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:26:14.865824 | orchestrator | Friday 29 August 2025 15:26:04 +0000 (0:00:00.761) 0:00:06.244 ********* 2025-08-29 15:26:14.865833 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.865844 | orchestrator | 2025-08-29 15:26:14.865854 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:26:14.865863 | orchestrator | Friday 29 August 2025 15:26:05 +0000 (0:00:00.257) 0:00:06.502 ********* 2025-08-29 15:26:14.865873 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.865883 | orchestrator | 2025-08-29 15:26:14.865893 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:14.865903 | orchestrator | Friday 29 August 2025 15:26:05 +0000 (0:00:00.315) 0:00:06.817 ********* 2025-08-29 15:26:14.865914 | orchestrator | 2025-08-29 15:26:14.865924 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:14.865935 | orchestrator | Friday 29 August 2025 15:26:05 +0000 (0:00:00.071) 0:00:06.889 ********* 2025-08-29 15:26:14.865946 | orchestrator | 2025-08-29 15:26:14.865957 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:14.865968 | orchestrator | Friday 29 August 2025 15:26:05 +0000 (0:00:00.070) 0:00:06.960 ********* 2025-08-29 15:26:14.865978 | orchestrator | 2025-08-29 15:26:14.865989 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:26:14.866000 | orchestrator | Friday 29 August 2025 15:26:05 +0000 (0:00:00.071) 0:00:07.032 ********* 2025-08-29 15:26:14.866066 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.866082 | orchestrator | 2025-08-29 15:26:14.866095 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-08-29 15:26:14.866160 | orchestrator | Friday 29 August 2025 15:26:05 +0000 (0:00:00.259) 0:00:07.292 ********* 2025-08-29 15:26:14.866174 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.866186 | orchestrator | 2025-08-29 15:26:14.866218 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-08-29 15:26:14.866228 | orchestrator | Friday 29 August 2025 15:26:06 +0000 (0:00:00.269) 0:00:07.562 ********* 2025-08-29 15:26:14.866238 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.866249 | orchestrator | 2025-08-29 15:26:14.866259 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-08-29 15:26:14.866269 | orchestrator | Friday 29 August 2025 15:26:06 +0000 (0:00:00.134) 0:00:07.696 ********* 2025-08-29 15:26:14.866280 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:26:14.866290 | orchestrator | 2025-08-29 15:26:14.866301 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-08-29 15:26:14.866311 | orchestrator | Friday 29 August 2025 15:26:08 +0000 (0:00:01.899) 0:00:09.596 ********* 2025-08-29 15:26:14.866322 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.866333 | orchestrator | 2025-08-29 15:26:14.866344 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-08-29 15:26:14.866355 | orchestrator | Friday 
29 August 2025 15:26:08 +0000 (0:00:00.283) 0:00:09.880 ********* 2025-08-29 15:26:14.866365 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.866377 | orchestrator | 2025-08-29 15:26:14.866388 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-08-29 15:26:14.866399 | orchestrator | Friday 29 August 2025 15:26:09 +0000 (0:00:00.888) 0:00:10.768 ********* 2025-08-29 15:26:14.866410 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.866422 | orchestrator | 2025-08-29 15:26:14.866433 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-08-29 15:26:14.866445 | orchestrator | Friday 29 August 2025 15:26:09 +0000 (0:00:00.170) 0:00:10.939 ********* 2025-08-29 15:26:14.866456 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:26:14.866467 | orchestrator | 2025-08-29 15:26:14.866479 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 15:26:14.866490 | orchestrator | Friday 29 August 2025 15:26:09 +0000 (0:00:00.179) 0:00:11.118 ********* 2025-08-29 15:26:14.866502 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.866513 | orchestrator | 2025-08-29 15:26:14.866525 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 15:26:14.866536 | orchestrator | Friday 29 August 2025 15:26:10 +0000 (0:00:00.286) 0:00:11.405 ********* 2025-08-29 15:26:14.866547 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:26:14.866559 | orchestrator | 2025-08-29 15:26:14.866570 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:26:14.866581 | orchestrator | Friday 29 August 2025 15:26:10 +0000 (0:00:00.258) 0:00:11.664 ********* 2025-08-29 15:26:14.866593 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.866604 | orchestrator | 2025-08-29 15:26:14.866615 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:26:14.866626 | orchestrator | Friday 29 August 2025 15:26:11 +0000 (0:00:01.349) 0:00:13.014 ********* 2025-08-29 15:26:14.866638 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.866649 | orchestrator | 2025-08-29 15:26:14.866660 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:26:14.866671 | orchestrator | Friday 29 August 2025 15:26:11 +0000 (0:00:00.279) 0:00:13.293 ********* 2025-08-29 15:26:14.866683 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.866694 | orchestrator | 2025-08-29 15:26:14.866705 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:14.866728 | orchestrator | Friday 29 August 2025 15:26:12 +0000 (0:00:00.261) 0:00:13.555 ********* 2025-08-29 15:26:14.866739 | orchestrator | 2025-08-29 15:26:14.866751 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:14.866762 | orchestrator | Friday 29 August 2025 15:26:12 +0000 (0:00:00.081) 0:00:13.636 ********* 2025-08-29 15:26:14.866774 | orchestrator | 2025-08-29 15:26:14.866785 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:14.866796 | orchestrator | Friday 29 August 2025 15:26:12 
+0000 (0:00:00.067) 0:00:13.704 ********* 2025-08-29 15:26:14.866807 | orchestrator | 2025-08-29 15:26:14.866819 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-08-29 15:26:14.866830 | orchestrator | Friday 29 August 2025 15:26:12 +0000 (0:00:00.072) 0:00:13.776 ********* 2025-08-29 15:26:14.866842 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:14.866853 | orchestrator | 2025-08-29 15:26:14.866864 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:26:14.866875 | orchestrator | Friday 29 August 2025 15:26:14 +0000 (0:00:01.939) 0:00:15.715 ********* 2025-08-29 15:26:14.866887 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-08-29 15:26:14.866898 | orchestrator |  "msg": [ 2025-08-29 15:26:14.866910 | orchestrator |  "Validator run completed.", 2025-08-29 15:26:14.866921 | orchestrator |  "You can find the report file here:", 2025-08-29 15:26:14.866932 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-08-29T15:25:59+00:00-report.json", 2025-08-29 15:26:14.866944 | orchestrator |  "on the following host:", 2025-08-29 15:26:14.866955 | orchestrator |  "testbed-manager" 2025-08-29 15:26:14.866966 | orchestrator |  ] 2025-08-29 15:26:14.866977 | orchestrator | } 2025-08-29 15:26:14.866989 | orchestrator | 2025-08-29 15:26:14.867000 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:26:14.867012 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 15:26:14.867025 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:26:14.867046 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:26:15.279066 | orchestrator | 2025-08-29 15:26:15.279202 | orchestrator | 2025-08-29 15:26:15.279219 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:26:15.279232 | orchestrator | Friday 29 August 2025 15:26:14 +0000 (0:00:00.440) 0:00:16.156 ********* 2025-08-29 15:26:15.279244 | orchestrator | =============================================================================== 2025-08-29 15:26:15.279255 | orchestrator | Write report file ------------------------------------------------------- 1.94s 2025-08-29 15:26:15.279266 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.90s 2025-08-29 15:26:15.279277 | orchestrator | Aggregate test results step one ----------------------------------------- 1.35s 2025-08-29 15:26:15.279287 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2025-08-29 15:26:15.279298 | orchestrator | Create report output directory ------------------------------------------ 0.92s 2025-08-29 15:26:15.279310 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.89s 2025-08-29 15:26:15.279320 | orchestrator | Aggregate test results step one ----------------------------------------- 0.76s 2025-08-29 15:26:15.279331 | orchestrator | Get timestamp for report file ------------------------------------------- 0.69s 2025-08-29 15:26:15.279342 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s 2025-08-29 15:26:15.279353 | 
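The mgr-module test in the play above gathers the module list as JSON and compares the enabled modules against a required set. A rough manual equivalent is sketched here, with the caveats that the mon container name (ceph-mon-testbed-node-0) is only inferred from the naming pattern of the other ceph-daemon containers in this log, and that the exact JSON keys can differ between Ceph releases:

    # dump the mgr module list from inside a mon container and show the enabled modules
    docker exec ceph-mon-testbed-node-0 ceph mgr module ls -f json > /tmp/mgr-modules.json
    python3 -c 'import json; d = json.load(open("/tmp/mgr-modules.json")); print(d.get("enabled_modules", d))'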
orchestrator | Print report file information ------------------------------------------- 0.44s 2025-08-29 15:26:15.279363 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.34s 2025-08-29 15:26:15.279399 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.33s 2025-08-29 15:26:15.279432 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2025-08-29 15:26:15.279443 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-08-29 15:26:15.279454 | orchestrator | Aggregate test results step three --------------------------------------- 0.32s 2025-08-29 15:26:15.279465 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2025-08-29 15:26:15.279475 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2025-08-29 15:26:15.279486 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.28s 2025-08-29 15:26:15.279499 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-08-29 15:26:15.279518 | orchestrator | Fail due to missing containers ------------------------------------------ 0.27s 2025-08-29 15:26:15.666778 | orchestrator | + osism validate ceph-osds 2025-08-29 15:26:37.058981 | orchestrator | 2025-08-29 15:26:37.059137 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-08-29 15:26:37.059191 | orchestrator | 2025-08-29 15:26:37.059205 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 15:26:37.059228 | orchestrator | Friday 29 August 2025 15:26:32 +0000 (0:00:00.501) 0:00:00.501 ********* 2025-08-29 15:26:37.059241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:37.059252 | orchestrator | 2025-08-29 15:26:37.059263 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 15:26:37.059274 | orchestrator | Friday 29 August 2025 15:26:33 +0000 (0:00:00.651) 0:00:01.153 ********* 2025-08-29 15:26:37.059285 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:37.059296 | orchestrator | 2025-08-29 15:26:37.059307 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 15:26:37.059319 | orchestrator | Friday 29 August 2025 15:26:33 +0000 (0:00:00.260) 0:00:01.413 ********* 2025-08-29 15:26:37.059329 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:37.059340 | orchestrator | 2025-08-29 15:26:37.059352 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 15:26:37.059363 | orchestrator | Friday 29 August 2025 15:26:34 +0000 (0:00:01.119) 0:00:02.533 ********* 2025-08-29 15:26:37.059374 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:37.059386 | orchestrator | 2025-08-29 15:26:37.059397 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-08-29 15:26:37.059407 | orchestrator | Friday 29 August 2025 15:26:34 +0000 (0:00:00.120) 0:00:02.654 ********* 2025-08-29 15:26:37.059418 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:37.059429 | orchestrator | 2025-08-29 15:26:37.059440 | orchestrator | TASK [Calculate OSD devices for 
each host] ************************************* 2025-08-29 15:26:37.059451 | orchestrator | Friday 29 August 2025 15:26:34 +0000 (0:00:00.133) 0:00:02.787 ********* 2025-08-29 15:26:37.059462 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:37.059473 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:37.059484 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:37.059494 | orchestrator | 2025-08-29 15:26:37.059505 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-08-29 15:26:37.059516 | orchestrator | Friday 29 August 2025 15:26:35 +0000 (0:00:00.335) 0:00:03.123 ********* 2025-08-29 15:26:37.059526 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:37.059537 | orchestrator | 2025-08-29 15:26:37.059548 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-08-29 15:26:37.059559 | orchestrator | Friday 29 August 2025 15:26:35 +0000 (0:00:00.164) 0:00:03.287 ********* 2025-08-29 15:26:37.059570 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:37.059581 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:37.059591 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:37.059628 | orchestrator | 2025-08-29 15:26:37.059640 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-08-29 15:26:37.059651 | orchestrator | Friday 29 August 2025 15:26:35 +0000 (0:00:00.361) 0:00:03.648 ********* 2025-08-29 15:26:37.059662 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:37.059672 | orchestrator | 2025-08-29 15:26:37.059683 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:26:37.059694 | orchestrator | Friday 29 August 2025 15:26:36 +0000 (0:00:00.636) 0:00:04.285 ********* 2025-08-29 15:26:37.059705 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:37.059715 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:37.059726 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:37.059736 | orchestrator | 2025-08-29 15:26:37.059747 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-08-29 15:26:37.059758 | orchestrator | Friday 29 August 2025 15:26:36 +0000 (0:00:00.544) 0:00:04.830 ********* 2025-08-29 15:26:37.059771 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2dbe5cc1a2da7e7a7109799bbfb26e808b64af1867c98547ac8c65ae0cc5f71e', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 15:26:37.059784 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e94631852303b8b8da41c4f62a99a8577068b65dd375148ed18013520bfba3dc', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.059796 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ece4516660c014072a7a43bc2d0e4494abe5ae46df5f45745a76bd9f5a13b52', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.059823 | orchestrator | skipping: [testbed-node-3] => (item={'id': '884885f5ce3dbca08ede3a4587536a0e15c760068e1e5aebf6a088b29ca3c949', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 
'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.059850 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f17f624b0ed67fd0567a2805604b5d6a43383438e8b7bb0cd00c89e28ae04f6d', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-08-29 15:26:37.059886 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5bc8b0b8e6feb06b93daafba65b8c0c032d4442b14d30e7f4ab4185ed8a6751f', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 16 minutes (healthy)'})  2025-08-29 15:26:37.059899 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd4a152a0dbebca1f048f2c7563927c4a899844c105e0795707c56d98ab2bf99c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:26:37.059933 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc82c7caa04c2c9584d54feffd66b448bf34072786a61a9ad21055e000b9ce33', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:26:37.059945 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7137d1c847e177cb59aeb5897e96b0e476e4a77efa347eceaea935a419d4f9c7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-08-29 15:26:37.059956 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'de3a240a951086a95f1488f8da23375554672f3c78f5fb7209ed6c3d7df5d4c0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-08-29 15:26:37.059977 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f68b8f914d43b53d7db851c3c7cfd6f3c692d521bb17cdfd6220fb189c3c9c9b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2025-08-29 15:26:37.059989 | orchestrator | skipping: [testbed-node-3] => (item={'id': '358a1798ea24a36488c50c9f5e0f9499fa767f6535fe56ae03c528895b17cc88', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2025-08-29 15:26:37.060002 | orchestrator | ok: [testbed-node-3] => (item={'id': '9c2e8a8c4c5d052a4c533c22dbb0b0aa5bff91de1cd8b69d96b3808aeefe29ff', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-08-29 15:26:37.060013 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c3ff38bcd71057364a803e1969aa9ce3d3c4117d0d2bafc2685c6f8602a79792', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-08-29 15:26:37.060024 | orchestrator | skipping: [testbed-node-3] => (item={'id': '44af087630a71dea3dbaa6f2c192c33c2a9819fd8d5f48a354af6fddf527b220', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 15:26:37.060036 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'05e8000bd3f6f97f2e0d947b1b72aaee32744a06bdc31bf6825eb5f14381bd48', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2025-08-29 15:26:37.060047 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c8104ab84a2162dd250851bc1f18b2cd26bb75be5e58513d804945a916faaa8e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 33 minutes (healthy)'})  2025-08-29 15:26:37.060059 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b9bd1022a43322fc17de8aa8204d0041e4f6a424aa541304b5fb4265f6d5a940', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 34 minutes'})  2025-08-29 15:26:37.060075 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e42dfece435314aecacaec84861f3208276158a8541ca8b7089085b5e8779d14', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 34 minutes'})  2025-08-29 15:26:37.060086 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f486bdea0c3f102a05047116beb6c98bdedeb0402544d051938012b4b4517173', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 35 minutes'})  2025-08-29 15:26:37.060105 | orchestrator | skipping: [testbed-node-4] => (item={'id': '31c2277670b8f95f030c86116d07a429583a382a903af6ba3629f034753f1064', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 15:26:37.158141 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b2c9c001ffd7a0ec473ce13ddf2915295e5e9789be6c134d3601b41d511be568', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.158290 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6ed9a7a0fb1be0681a38018872e7233f691f24b9652077926858768fe12c4861', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.158307 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f363e5434ce6f1200a26c82397f0209ac16fc7155720b067d64724ee7f65eac0', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.158340 | orchestrator | skipping: [testbed-node-4] => (item={'id': '51cc1333247eae400f8b852d3f3995704589093de585afe20f71f721460eb734', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-08-29 15:26:37.158352 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6974d78a210b7587dff0b316ce2ccd02c789a25c0534e26f202f2ce0be668a20', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 16 minutes (healthy)'})  2025-08-29 15:26:37.158363 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c1aa2d49bde1e35618b2656eaf44cc8542b47cdc92bc4ff0dc06855e0cc55384', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:26:37.158376 | orchestrator | skipping: [testbed-node-4] => (item={'id': '32e0c3ab17d77c1420bfc39da02f4091fed2b55f1b7bfd87b904af11898a14c8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:26:37.158387 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c954cda40ecbd07485f159c72094334fe58f17197cd38132704d24bad4c0e7a', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-08-29 15:26:37.158398 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcbbbf7609df82573c663dfd4dc8ed078854fb8beb1183e1a440e4e3ba951784', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-08-29 15:26:37.158410 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ddc42b5d6dd2fdc2e85fdb72fb50696a472bd3a63d9fd6e65bd38a9582dc2caa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2025-08-29 15:26:37.158422 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f1b9065003883595aac16527c5747b1bfdca01cf2077a1dbf6b6f7f95fa208bd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2025-08-29 15:26:37.158436 | orchestrator | ok: [testbed-node-4] => (item={'id': '1cd715cb0793eed63cdd43728c86f3b9e145a3ad59f4b794f8d99585e17ac825', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-08-29 15:26:37.158447 | orchestrator | ok: [testbed-node-4] => (item={'id': '1db3cec5a56fa56165e9e1ec6d3d2c49e2f6ceb98301d18ecaeadc7450897d12', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-08-29 15:26:37.158459 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e1f0e74926af8f2dc2e466d7a77177ebad64db6bf79368b49d09f0e336987e6c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 15:26:37.158523 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c0e656d401ffccaa9943cb8be7006160ac433de05d0419774a440405cae05e0f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2025-08-29 15:26:37.158538 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1854e0b3a15c644488b7f0fea703ef1e4abd8967ce83800aa10fbb9ecf8a32c4', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 33 minutes (healthy)'})  2025-08-29 15:26:37.158557 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c051e53bd187bdb331c816cfb1d3d336a0bceb88065367767f87ac9cb092a5f0', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 34 minutes'})  2025-08-29 15:26:37.158568 
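The loop in this task walks every container reported on each OSD node and keeps only the ceph-osd-* entries, which the following tasks then count against the expected number of OSD devices per host. A hedged manual spot-check of that filter, again assuming Docker as the runtime and shell access to an OSD node such as testbed-node-3:

    # list the ceph-osd containers on this host and count them
    docker ps --all --filter name=ceph-osd --format '{{.Names}} {{.State}}'
    docker ps --all --filter name=ceph-osd --format '{{.Names}}' | wc -l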
| orchestrator | skipping: [testbed-node-4] => (item={'id': '446ec19833a64438e90aae9b0e7972f568157b76d9d0b7109ae98bd96a1ba6ce', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 34 minutes'})  2025-08-29 15:26:37.158579 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd28cd465813d91c452de46b55849213af3f554c02d8c3c8971dbadd1c0ac6f96', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 15:26:37.158590 | orchestrator | skipping: [testbed-node-4] => (item={'id': '321430d7bfcb29add749e8286f7b3cca217a31a77f6182c72a98ac15891d5601', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 35 minutes'})  2025-08-29 15:26:37.158601 | orchestrator | skipping: [testbed-node-5] => (item={'id': '69f47a0f092d330c35d6ae811eb55eaa46b6dcc1de839b216506b78d1b014578', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.158612 | orchestrator | skipping: [testbed-node-5] => (item={'id': '779226452e32723adf7c8221f608e5ad6fc9e6982d137be172dfe5fcd1124c77', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.158623 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7fb7506165034f83c37b8e2330fba8a6724e0ade0d700b81b8b84fed49ee1fcf', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:26:37.158634 | orchestrator | skipping: [testbed-node-5] => (item={'id': '34ac1f3602d65281ee06365b7151dbcf23ae08de1620bee760bffa3ae040df5a', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-08-29 15:26:37.158647 | orchestrator | skipping: [testbed-node-5] => (item={'id': '400ebd13c4505e993839d21f31319549c468fbd3e5976aed44d09012119a31de', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 16 minutes (healthy)'})  2025-08-29 15:26:37.158659 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8735f9e3a82e90bc81e94474f1f9a8bc4d5c2cb9574936834968a859165dd2c1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:26:37.158676 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8e571379909764ca270250e20c2148bc6fbe2dbd61c2c0ffa73bd0173d03d756', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:26:37.158690 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0516031387cc23b98ae62b28a8a7f52a42664f3f8d72e2e88d1448f1b5869753', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-08-29 15:26:37.158703 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'a048864b80392cd62870da505186cde9021afaf19396efd4cfa86f07c2d98d7c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-08-29 15:26:37.158729 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85a37b512df83e422a05ab606f43d4410784e6a029ac0574b7a6ee28d24b01ff', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2025-08-29 15:26:45.295067 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5d9f5fe58712c956951a8c8d0914a31bda945abe96c9230695ffbd8e1a60e53e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2025-08-29 15:26:45.295257 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f6e4a1e4aaf47dfa2f4ee2ec4f6fac1ec312153061581365e5bad13a22f90359', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-08-29 15:26:45.296725 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c0ad32a16c079bcf2c89ebfa5ea288ea07450f6a6a7cd84a6138f5e635213634', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-08-29 15:26:45.296755 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0e590653a0d051a21e6845e47fed3241a38f23ce4611ab20979ba17843feb05b', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 15:26:45.296771 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df1facc20552b62e1c8aa750a351a18871feffcf9135fe1220c2c5bc6dd5a19f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2025-08-29 15:26:45.296788 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03aa3807853ef408475c3ed90c794eb610486016369f59e3d0c9ce2ea7daaba8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 33 minutes (healthy)'})  2025-08-29 15:26:45.296803 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4f83c1dad4d0064e9e126e75afbae1155932b9e74648a2f6c30cbfcab4f06436', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 34 minutes'})  2025-08-29 15:26:45.296817 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac3900efb2095fddb2700bc69278d376a2ff3aa8b704d632b8310ca533cb8fa0', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 34 minutes'})  2025-08-29 15:26:45.296828 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'afa952690f619a0b45be470cf136232062fdac69fa8c4667021a8df43d2fffa2', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 35 minutes'})  2025-08-29 15:26:45.296837 | orchestrator | 2025-08-29 15:26:45.296847 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-08-29 15:26:45.296857 | orchestrator | Friday 29 August 2025 15:26:37 +0000 (0:00:00.536) 0:00:05.366 ********* 2025-08-29 15:26:45.296865 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 15:26:45.296874 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:45.296882 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:45.296890 | orchestrator | 2025-08-29 15:26:45.296898 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-08-29 15:26:45.296906 | orchestrator | Friday 29 August 2025 15:26:37 +0000 (0:00:00.318) 0:00:05.685 ********* 2025-08-29 15:26:45.296914 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.296923 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:45.296931 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:45.296938 | orchestrator | 2025-08-29 15:26:45.296946 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-08-29 15:26:45.296954 | orchestrator | Friday 29 August 2025 15:26:37 +0000 (0:00:00.306) 0:00:05.991 ********* 2025-08-29 15:26:45.297002 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297011 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:45.297032 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:45.297040 | orchestrator | 2025-08-29 15:26:45.297048 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:26:45.297056 | orchestrator | Friday 29 August 2025 15:26:38 +0000 (0:00:00.609) 0:00:06.601 ********* 2025-08-29 15:26:45.297063 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297071 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:45.297079 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:45.297086 | orchestrator | 2025-08-29 15:26:45.297094 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-08-29 15:26:45.297102 | orchestrator | Friday 29 August 2025 15:26:38 +0000 (0:00:00.332) 0:00:06.933 ********* 2025-08-29 15:26:45.297110 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-08-29 15:26:45.297130 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-08-29 15:26:45.297138 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297146 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-08-29 15:26:45.297154 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-08-29 15:26:45.297240 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:45.297251 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-08-29 15:26:45.297259 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-08-29 15:26:45.297267 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:45.297275 | orchestrator | 2025-08-29 15:26:45.297283 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-08-29 15:26:45.297291 | orchestrator | Friday 29 August 2025 15:26:39 +0000 (0:00:00.335) 0:00:07.268 ********* 2025-08-29 15:26:45.297298 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297305 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:45.297312 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:45.297319 | orchestrator | 2025-08-29 15:26:45.297325 | orchestrator | TASK [Set test result to failed if an 
OSD is not running] ********************** 2025-08-29 15:26:45.297332 | orchestrator | Friday 29 August 2025 15:26:39 +0000 (0:00:00.346) 0:00:07.615 ********* 2025-08-29 15:26:45.297338 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297345 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:45.297351 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:45.297358 | orchestrator | 2025-08-29 15:26:45.297364 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 15:26:45.297371 | orchestrator | Friday 29 August 2025 15:26:40 +0000 (0:00:00.578) 0:00:08.194 ********* 2025-08-29 15:26:45.297377 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297384 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:45.297390 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:45.297397 | orchestrator | 2025-08-29 15:26:45.297403 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-08-29 15:26:45.297410 | orchestrator | Friday 29 August 2025 15:26:40 +0000 (0:00:00.330) 0:00:08.525 ********* 2025-08-29 15:26:45.297416 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297423 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:45.297430 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:45.297436 | orchestrator | 2025-08-29 15:26:45.297443 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:26:45.297449 | orchestrator | Friday 29 August 2025 15:26:40 +0000 (0:00:00.358) 0:00:08.883 ********* 2025-08-29 15:26:45.297456 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297462 | orchestrator | 2025-08-29 15:26:45.297469 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:26:45.297482 | orchestrator | Friday 29 August 2025 15:26:41 +0000 (0:00:00.256) 0:00:09.140 ********* 2025-08-29 15:26:45.297489 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297495 | orchestrator | 2025-08-29 15:26:45.297502 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:26:45.297508 | orchestrator | Friday 29 August 2025 15:26:41 +0000 (0:00:00.252) 0:00:09.392 ********* 2025-08-29 15:26:45.297515 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297522 | orchestrator | 2025-08-29 15:26:45.297529 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:45.297536 | orchestrator | Friday 29 August 2025 15:26:41 +0000 (0:00:00.241) 0:00:09.633 ********* 2025-08-29 15:26:45.297543 | orchestrator | 2025-08-29 15:26:45.297549 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:45.297556 | orchestrator | Friday 29 August 2025 15:26:41 +0000 (0:00:00.067) 0:00:09.701 ********* 2025-08-29 15:26:45.297563 | orchestrator | 2025-08-29 15:26:45.297570 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:45.297576 | orchestrator | Friday 29 August 2025 15:26:41 +0000 (0:00:00.063) 0:00:09.764 ********* 2025-08-29 15:26:45.297583 | orchestrator | 2025-08-29 15:26:45.297590 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:26:45.297596 | orchestrator | Friday 29 August 2025 15:26:41 +0000 (0:00:00.304) 
0:00:10.068 ********* 2025-08-29 15:26:45.297603 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297610 | orchestrator | 2025-08-29 15:26:45.297616 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-08-29 15:26:45.297623 | orchestrator | Friday 29 August 2025 15:26:42 +0000 (0:00:00.273) 0:00:10.341 ********* 2025-08-29 15:26:45.297629 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:45.297636 | orchestrator | 2025-08-29 15:26:45.297643 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:26:45.297649 | orchestrator | Friday 29 August 2025 15:26:42 +0000 (0:00:00.250) 0:00:10.592 ********* 2025-08-29 15:26:45.297656 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297662 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:45.297669 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:45.297675 | orchestrator | 2025-08-29 15:26:45.297682 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-08-29 15:26:45.297689 | orchestrator | Friday 29 August 2025 15:26:42 +0000 (0:00:00.362) 0:00:10.955 ********* 2025-08-29 15:26:45.297696 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297702 | orchestrator | 2025-08-29 15:26:45.297709 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-08-29 15:26:45.297716 | orchestrator | Friday 29 August 2025 15:26:43 +0000 (0:00:00.237) 0:00:11.192 ********* 2025-08-29 15:26:45.297722 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:26:45.297729 | orchestrator | 2025-08-29 15:26:45.297736 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-08-29 15:26:45.297742 | orchestrator | Friday 29 August 2025 15:26:44 +0000 (0:00:01.603) 0:00:12.795 ********* 2025-08-29 15:26:45.297749 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297755 | orchestrator | 2025-08-29 15:26:45.297763 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-08-29 15:26:45.297774 | orchestrator | Friday 29 August 2025 15:26:44 +0000 (0:00:00.145) 0:00:12.941 ********* 2025-08-29 15:26:45.297785 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:45.297796 | orchestrator | 2025-08-29 15:26:45.297808 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-08-29 15:26:45.297819 | orchestrator | Friday 29 August 2025 15:26:45 +0000 (0:00:00.310) 0:00:13.251 ********* 2025-08-29 15:26:45.297837 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:59.196332 | orchestrator | 2025-08-29 15:26:59.196469 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-08-29 15:26:59.196487 | orchestrator | Friday 29 August 2025 15:26:45 +0000 (0:00:00.114) 0:00:13.366 ********* 2025-08-29 15:26:59.196523 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.196536 | orchestrator | 2025-08-29 15:26:59.196548 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:26:59.196559 | orchestrator | Friday 29 August 2025 15:26:45 +0000 (0:00:00.133) 0:00:13.500 ********* 2025-08-29 15:26:59.196569 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.196580 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.196591 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 15:26:59.196602 | orchestrator | 2025-08-29 15:26:59.196613 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-08-29 15:26:59.196624 | orchestrator | Friday 29 August 2025 15:26:46 +0000 (0:00:00.613) 0:00:14.114 ********* 2025-08-29 15:26:59.196635 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:26:59.196647 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:26:59.196658 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:26:59.196669 | orchestrator | 2025-08-29 15:26:59.196680 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-08-29 15:26:59.196691 | orchestrator | Friday 29 August 2025 15:26:48 +0000 (0:00:02.349) 0:00:16.463 ********* 2025-08-29 15:26:59.196702 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.196713 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.196723 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.196734 | orchestrator | 2025-08-29 15:26:59.196745 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-08-29 15:26:59.196757 | orchestrator | Friday 29 August 2025 15:26:48 +0000 (0:00:00.313) 0:00:16.777 ********* 2025-08-29 15:26:59.196769 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.196781 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.196794 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.196806 | orchestrator | 2025-08-29 15:26:59.196819 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-08-29 15:26:59.196831 | orchestrator | Friday 29 August 2025 15:26:49 +0000 (0:00:00.554) 0:00:17.331 ********* 2025-08-29 15:26:59.196844 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:59.196856 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:59.196868 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:59.196881 | orchestrator | 2025-08-29 15:26:59.196893 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-08-29 15:26:59.196905 | orchestrator | Friday 29 August 2025 15:26:49 +0000 (0:00:00.579) 0:00:17.910 ********* 2025-08-29 15:26:59.196918 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.196929 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.196942 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.196955 | orchestrator | 2025-08-29 15:26:59.196968 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-08-29 15:26:59.196980 | orchestrator | Friday 29 August 2025 15:26:50 +0000 (0:00:00.343) 0:00:18.254 ********* 2025-08-29 15:26:59.196992 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:59.197004 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:59.197016 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:59.197029 | orchestrator | 2025-08-29 15:26:59.197040 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-08-29 15:26:59.197102 | orchestrator | Friday 29 August 2025 15:26:50 +0000 (0:00:00.329) 0:00:18.583 ********* 2025-08-29 15:26:59.197116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:59.197128 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:59.197139 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:59.197150 | orchestrator | 2025-08-29 15:26:59.197161 | orchestrator | TASK [Prepare 
test data] ******************************************************* 2025-08-29 15:26:59.197172 | orchestrator | Friday 29 August 2025 15:26:50 +0000 (0:00:00.300) 0:00:18.883 ********* 2025-08-29 15:26:59.197183 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.197214 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.197226 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.197245 | orchestrator | 2025-08-29 15:26:59.197256 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-08-29 15:26:59.197267 | orchestrator | Friday 29 August 2025 15:26:51 +0000 (0:00:00.796) 0:00:19.680 ********* 2025-08-29 15:26:59.197277 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.197288 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.197298 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.197309 | orchestrator | 2025-08-29 15:26:59.197320 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-08-29 15:26:59.197331 | orchestrator | Friday 29 August 2025 15:26:52 +0000 (0:00:00.544) 0:00:20.224 ********* 2025-08-29 15:26:59.197342 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.197352 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.197363 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.197374 | orchestrator | 2025-08-29 15:26:59.197390 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-08-29 15:26:59.197401 | orchestrator | Friday 29 August 2025 15:26:52 +0000 (0:00:00.331) 0:00:20.556 ********* 2025-08-29 15:26:59.197412 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:59.197422 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:26:59.197433 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:26:59.197444 | orchestrator | 2025-08-29 15:26:59.197455 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-08-29 15:26:59.197466 | orchestrator | Friday 29 August 2025 15:26:52 +0000 (0:00:00.339) 0:00:20.895 ********* 2025-08-29 15:26:59.197477 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:26:59.197487 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:26:59.197498 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:26:59.197509 | orchestrator | 2025-08-29 15:26:59.197519 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 15:26:59.197530 | orchestrator | Friday 29 August 2025 15:26:53 +0000 (0:00:00.599) 0:00:21.495 ********* 2025-08-29 15:26:59.197541 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:59.197552 | orchestrator | 2025-08-29 15:26:59.197563 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 15:26:59.197574 | orchestrator | Friday 29 August 2025 15:26:53 +0000 (0:00:00.262) 0:00:21.758 ********* 2025-08-29 15:26:59.197585 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:26:59.197596 | orchestrator | 2025-08-29 15:26:59.197625 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:26:59.197636 | orchestrator | Friday 29 August 2025 15:26:53 +0000 (0:00:00.262) 0:00:22.020 ********* 2025-08-29 15:26:59.197647 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:59.197658 | orchestrator | 2025-08-29 15:26:59.197669 | orchestrator | TASK [Aggregate test 
results step two] ***************************************** 2025-08-29 15:26:59.197680 | orchestrator | Friday 29 August 2025 15:26:55 +0000 (0:00:01.777) 0:00:23.797 ********* 2025-08-29 15:26:59.197690 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:59.197701 | orchestrator | 2025-08-29 15:26:59.197712 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:26:59.197723 | orchestrator | Friday 29 August 2025 15:26:56 +0000 (0:00:00.295) 0:00:24.093 ********* 2025-08-29 15:26:59.197734 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:59.197744 | orchestrator | 2025-08-29 15:26:59.197755 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:59.197766 | orchestrator | Friday 29 August 2025 15:26:56 +0000 (0:00:00.272) 0:00:24.366 ********* 2025-08-29 15:26:59.197776 | orchestrator | 2025-08-29 15:26:59.197787 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:59.197798 | orchestrator | Friday 29 August 2025 15:26:56 +0000 (0:00:00.070) 0:00:24.436 ********* 2025-08-29 15:26:59.197808 | orchestrator | 2025-08-29 15:26:59.197819 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:26:59.197838 | orchestrator | Friday 29 August 2025 15:26:56 +0000 (0:00:00.070) 0:00:24.506 ********* 2025-08-29 15:26:59.197849 | orchestrator | 2025-08-29 15:26:59.197859 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-08-29 15:26:59.197870 | orchestrator | Friday 29 August 2025 15:26:56 +0000 (0:00:00.071) 0:00:24.578 ********* 2025-08-29 15:26:59.197881 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:26:59.197891 | orchestrator | 2025-08-29 15:26:59.197903 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:26:59.197913 | orchestrator | Friday 29 August 2025 15:26:58 +0000 (0:00:01.715) 0:00:26.294 ********* 2025-08-29 15:26:59.197924 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-08-29 15:26:59.197934 | orchestrator |  "msg": [ 2025-08-29 15:26:59.197945 | orchestrator |  "Validator run completed.", 2025-08-29 15:26:59.197956 | orchestrator |  "You can find the report file here:", 2025-08-29 15:26:59.197967 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-08-29T15:26:32+00:00-report.json", 2025-08-29 15:26:59.197978 | orchestrator |  "on the following host:", 2025-08-29 15:26:59.197989 | orchestrator |  "testbed-manager" 2025-08-29 15:26:59.198000 | orchestrator |  ] 2025-08-29 15:26:59.198011 | orchestrator | } 2025-08-29 15:26:59.198075 | orchestrator | 2025-08-29 15:26:59.198087 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:26:59.198114 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-08-29 15:26:59.199554 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 15:26:59.199573 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 15:26:59.199584 | orchestrator | 2025-08-29 15:26:59.199596 | orchestrator | 2025-08-29 15:26:59.199608 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:26:59.199619 | orchestrator | Friday 29 August 2025 15:26:59 +0000 (0:00:00.950) 0:00:27.244 ********* 2025-08-29 15:26:59.199630 | orchestrator | =============================================================================== 2025-08-29 15:26:59.199641 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.35s 2025-08-29 15:26:59.199652 | orchestrator | Aggregate test results step one ----------------------------------------- 1.78s 2025-08-29 15:26:59.199662 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2025-08-29 15:26:59.199673 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.60s 2025-08-29 15:26:59.199693 | orchestrator | Create report output directory ------------------------------------------ 1.12s 2025-08-29 15:26:59.199705 | orchestrator | Print report file information ------------------------------------------- 0.95s 2025-08-29 15:26:59.199716 | orchestrator | Prepare test data ------------------------------------------------------- 0.80s 2025-08-29 15:26:59.199726 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-08-29 15:26:59.199737 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.64s 2025-08-29 15:26:59.199748 | orchestrator | Prepare test data ------------------------------------------------------- 0.61s 2025-08-29 15:26:59.199759 | orchestrator | Set test result to passed if count matches ------------------------------ 0.61s 2025-08-29 15:26:59.199769 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.60s 2025-08-29 15:26:59.199780 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.58s 2025-08-29 15:26:59.199791 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.58s 2025-08-29 15:26:59.199802 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.55s 2025-08-29 15:26:59.199826 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.54s 2025-08-29 15:26:59.199853 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2025-08-29 15:26:59.581387 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s 2025-08-29 15:26:59.581477 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s 2025-08-29 15:26:59.581489 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2025-08-29 15:26:59.980356 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-08-29 15:26:59.991100 | orchestrator | + set -e 2025-08-29 15:26:59.991176 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 15:26:59.991230 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 15:26:59.991243 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 15:26:59.991254 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 15:26:59.991265 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 15:26:59.991276 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 15:26:59.991288 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 15:26:59.991299 | orchestrator | ++ export 
MANAGER_VERSION=9.2.0 2025-08-29 15:26:59.991310 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 15:26:59.991321 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 15:26:59.991331 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 15:26:59.991342 | orchestrator | ++ export ARA=false 2025-08-29 15:26:59.991353 | orchestrator | ++ ARA=false 2025-08-29 15:26:59.991364 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 15:26:59.991374 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 15:26:59.991385 | orchestrator | ++ export TEMPEST=false 2025-08-29 15:26:59.991395 | orchestrator | ++ TEMPEST=false 2025-08-29 15:26:59.991406 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 15:26:59.991417 | orchestrator | ++ IS_ZUUL=true 2025-08-29 15:26:59.991428 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 15:26:59.991439 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-08-29 15:26:59.991449 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 15:26:59.991460 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 15:26:59.991470 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 15:26:59.991481 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 15:26:59.991491 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 15:26:59.991502 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 15:26:59.991512 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 15:26:59.991523 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 15:26:59.991533 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 15:26:59.991544 | orchestrator | + source /etc/os-release 2025-08-29 15:26:59.991982 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-08-29 15:26:59.992006 | orchestrator | ++ NAME=Ubuntu 2025-08-29 15:26:59.992018 | orchestrator | ++ VERSION_ID=24.04 2025-08-29 15:26:59.992031 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-08-29 15:26:59.992043 | orchestrator | ++ VERSION_CODENAME=noble 2025-08-29 15:26:59.992056 | orchestrator | ++ ID=ubuntu 2025-08-29 15:26:59.992068 | orchestrator | ++ ID_LIKE=debian 2025-08-29 15:26:59.992078 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-08-29 15:26:59.992089 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-08-29 15:26:59.992100 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-08-29 15:26:59.992111 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-08-29 15:26:59.992146 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-08-29 15:26:59.992158 | orchestrator | ++ LOGO=ubuntu-logo 2025-08-29 15:26:59.992168 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-08-29 15:26:59.992181 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-08-29 15:26:59.992224 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-08-29 15:27:00.027154 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-08-29 15:27:25.506947 | orchestrator | 2025-08-29 15:27:25.507062 | orchestrator | # Status of Elasticsearch 2025-08-29 15:27:25.507071 | orchestrator | 2025-08-29 15:27:25.507077 | orchestrator | + pushd /opt/configuration/contrib 2025-08-29 15:27:25.507090 | orchestrator | + echo 2025-08-29 15:27:25.507096 | orchestrator | + echo '# Status of Elasticsearch' 
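The check_elasticsearch call below reads the cluster health document and reports the fields visible in its output (status, number_of_nodes, active_shards, and so on). As an illustrative stand-alone probe only, the same health document could be fetched directly with curl; the port (9200) and the TLS handling are assumptions here, since the plugin invocation in the log only names the host:

    # Hypothetical direct cluster-health probe, roughly equivalent to the
    # check_elasticsearch call below. Port 9200 and --insecure are assumptions;
    # the log only shows the host api-int.testbed.osism.xyz.
    health=$(curl -fsS --insecure "https://api-int.testbed.osism.xyz:9200/_cluster/health")
    echo "$health"
    # Exit 2 (the Nagios CRITICAL convention) unless the cluster reports green.
    echo "$health" | grep -q '"status":"green"' || exit 2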
2025-08-29 15:27:25.507101 | orchestrator | + echo 2025-08-29 15:27:25.507106 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-08-29 15:27:25.728050 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-08-29 15:27:25.729538 | orchestrator | 2025-08-29 15:27:25.729560 | orchestrator | # Status of MariaDB 2025-08-29 15:27:25.729573 | orchestrator | 2025-08-29 15:27:25.729585 | orchestrator | + echo 2025-08-29 15:27:25.729596 | orchestrator | + echo '# Status of MariaDB' 2025-08-29 15:27:25.729607 | orchestrator | + echo 2025-08-29 15:27:25.729618 | orchestrator | + MARIADB_USER=root_shard_0 2025-08-29 15:27:25.729630 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-08-29 15:27:25.806112 | orchestrator | Reading package lists... 2025-08-29 15:27:26.231397 | orchestrator | Building dependency tree... 2025-08-29 15:27:26.232022 | orchestrator | Reading state information... 2025-08-29 15:27:26.700417 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-08-29 15:27:26.700515 | orchestrator | bc set to manually installed. 2025-08-29 15:27:26.700532 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-08-29 15:27:27.365311 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-08-29 15:27:27.365402 | orchestrator | 2025-08-29 15:27:27.365417 | orchestrator | # Status of Prometheus 2025-08-29 15:27:27.365429 | orchestrator | 2025-08-29 15:27:27.365441 | orchestrator | + echo 2025-08-29 15:27:27.365452 | orchestrator | + echo '# Status of Prometheus' 2025-08-29 15:27:27.365463 | orchestrator | + echo 2025-08-29 15:27:27.365474 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-08-29 15:27:27.418562 | orchestrator | Unauthorized 2025-08-29 15:27:27.422191 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-08-29 15:27:27.481525 | orchestrator | Unauthorized 2025-08-29 15:27:27.486430 | orchestrator | 2025-08-29 15:27:27.486492 | orchestrator | # Status of RabbitMQ 2025-08-29 15:27:27.486506 | orchestrator | 2025-08-29 15:27:27.486517 | orchestrator | + echo 2025-08-29 15:27:27.486528 | orchestrator | + echo '# Status of RabbitMQ' 2025-08-29 15:27:27.486539 | orchestrator | + echo 2025-08-29 15:27:27.486551 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-08-29 15:27:27.985844 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-08-29 15:27:27.996422 | orchestrator | 2025-08-29 15:27:27.996494 | orchestrator | # Status of Redis 2025-08-29 15:27:27.996508 | orchestrator | 2025-08-29 15:27:27.996520 | orchestrator | + echo 2025-08-29 15:27:27.996531 | orchestrator | + echo '# Status of Redis' 2025-08-29 15:27:27.996543 | orchestrator | + echo 2025-08-29 15:27:27.996595 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 
-j 2025-08-29 15:27:28.000997 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001529s;;;0.000000;10.000000 2025-08-29 15:27:28.001292 | orchestrator | + popd 2025-08-29 15:27:28.001492 | orchestrator | 2025-08-29 15:27:28.001538 | orchestrator | # Create backup of MariaDB database 2025-08-29 15:27:28.001551 | orchestrator | 2025-08-29 15:27:28.001563 | orchestrator | + echo 2025-08-29 15:27:28.001575 | orchestrator | + echo '# Create backup of MariaDB database' 2025-08-29 15:27:28.001586 | orchestrator | + echo 2025-08-29 15:27:28.001598 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-08-29 15:27:30.061729 | orchestrator | 2025-08-29 15:27:30 | INFO  | Task f3ede7a0-5750-4811-98ef-20fe0b075235 (mariadb_backup) was prepared for execution. 2025-08-29 15:27:30.061814 | orchestrator | 2025-08-29 15:27:30 | INFO  | It takes a moment until task f3ede7a0-5750-4811-98ef-20fe0b075235 (mariadb_backup) has been started and output is visible here. 2025-08-29 15:29:02.589795 | orchestrator | 2025-08-29 15:29:02.589923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:29:02.589941 | orchestrator | 2025-08-29 15:29:02.589961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:29:02.589985 | orchestrator | Friday 29 August 2025 15:27:34 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-08-29 15:29:02.590112 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:29:02.590139 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:29:02.590160 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:29:02.590178 | orchestrator | 2025-08-29 15:29:02.590196 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:29:02.590208 | orchestrator | Friday 29 August 2025 15:27:34 +0000 (0:00:00.358) 0:00:00.561 ********* 2025-08-29 15:29:02.590219 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 15:29:02.590230 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 15:29:02.590240 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 15:29:02.590251 | orchestrator | 2025-08-29 15:29:02.590261 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 15:29:02.590272 | orchestrator | 2025-08-29 15:29:02.590283 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 15:29:02.590294 | orchestrator | Friday 29 August 2025 15:27:35 +0000 (0:00:00.700) 0:00:01.261 ********* 2025-08-29 15:29:02.590305 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:29:02.590318 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:29:02.590334 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:29:02.590353 | orchestrator | 2025-08-29 15:29:02.590368 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:29:02.590396 | orchestrator | Friday 29 August 2025 15:27:35 +0000 (0:00:00.470) 0:00:01.732 ********* 2025-08-29 15:29:02.590448 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:29:02.590471 | orchestrator | 2025-08-29 15:29:02.590492 | orchestrator | TASK [mariadb : Get MariaDB container facts] 
*********************************** 2025-08-29 15:29:02.590510 | orchestrator | Friday 29 August 2025 15:27:36 +0000 (0:00:00.576) 0:00:02.308 ********* 2025-08-29 15:29:02.590527 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:29:02.590538 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:29:02.590549 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:29:02.590559 | orchestrator | 2025-08-29 15:29:02.590570 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-08-29 15:29:02.590581 | orchestrator | Friday 29 August 2025 15:27:39 +0000 (0:00:03.299) 0:00:05.607 ********* 2025-08-29 15:29:02.590591 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 15:29:02.590603 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-08-29 15:29:02.590620 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 15:29:02.590639 | orchestrator | mariadb_bootstrap_restart 2025-08-29 15:29:02.590658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:29:02.590689 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:29:02.590706 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:29:02.590724 | orchestrator | 2025-08-29 15:29:02.590742 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 15:29:02.590759 | orchestrator | skipping: no hosts matched 2025-08-29 15:29:02.590776 | orchestrator | 2025-08-29 15:29:02.590793 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 15:29:02.590811 | orchestrator | skipping: no hosts matched 2025-08-29 15:29:02.590900 | orchestrator | 2025-08-29 15:29:02.590921 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 15:29:02.590940 | orchestrator | skipping: no hosts matched 2025-08-29 15:29:02.590959 | orchestrator | 2025-08-29 15:29:02.590976 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-08-29 15:29:02.590987 | orchestrator | 2025-08-29 15:29:02.590998 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 15:29:02.591008 | orchestrator | Friday 29 August 2025 15:29:01 +0000 (0:01:21.726) 0:01:27.334 ********* 2025-08-29 15:29:02.591019 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:29:02.591044 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:29:02.591056 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:29:02.591066 | orchestrator | 2025-08-29 15:29:02.591077 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 15:29:02.591088 | orchestrator | Friday 29 August 2025 15:29:01 +0000 (0:00:00.317) 0:01:27.651 ********* 2025-08-29 15:29:02.591098 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:29:02.591109 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:29:02.591120 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:29:02.591130 | orchestrator | 2025-08-29 15:29:02.591141 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:29:02.591153 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:29:02.591165 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 
skipped=3  rescued=0 ignored=0 2025-08-29 15:29:02.591176 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 15:29:02.591187 | orchestrator | 2025-08-29 15:29:02.591198 | orchestrator | 2025-08-29 15:29:02.591208 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:29:02.591219 | orchestrator | Friday 29 August 2025 15:29:02 +0000 (0:00:00.473) 0:01:28.124 ********* 2025-08-29 15:29:02.591230 | orchestrator | =============================================================================== 2025-08-29 15:29:02.591241 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 81.73s 2025-08-29 15:29:02.591275 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.30s 2025-08-29 15:29:02.591286 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-08-29 15:29:02.591297 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2025-08-29 15:29:02.591308 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.47s 2025-08-29 15:29:02.591319 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s 2025-08-29 15:29:02.591330 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-08-29 15:29:02.591340 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2025-08-29 15:29:02.967566 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-08-29 15:29:02.973275 | orchestrator | + set -e 2025-08-29 15:29:02.973321 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 15:29:02.973336 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 15:29:02.973347 | orchestrator | ++ INTERACTIVE=false 2025-08-29 15:29:02.973358 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 15:29:02.973369 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 15:29:02.973380 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 15:29:02.973751 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 15:29:02.977585 | orchestrator | 2025-08-29 15:29:02.977666 | orchestrator | # OpenStack endpoints 2025-08-29 15:29:02.977682 | orchestrator | 2025-08-29 15:29:02.977693 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 15:29:02.977705 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 15:29:02.977716 | orchestrator | + export OS_CLOUD=admin 2025-08-29 15:29:02.977726 | orchestrator | + OS_CLOUD=admin 2025-08-29 15:29:02.977738 | orchestrator | + echo 2025-08-29 15:29:02.977749 | orchestrator | + echo '# OpenStack endpoints' 2025-08-29 15:29:02.977759 | orchestrator | + echo 2025-08-29 15:29:02.977770 | orchestrator | + openstack endpoint list 2025-08-29 15:29:06.435384 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-08-29 15:29:06.435512 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-08-29 15:29:06.435564 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-08-29 15:29:06.435576 | orchestrator | | 09f68cd863b1459e9626147d8951760c | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-08-29 15:29:06.435587 | orchestrator | | 0c22856cafe141188aa0a593b4da98c2 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-08-29 15:29:06.435616 | orchestrator | | 0f6b10c1829d43dcac798f341ad75bd7 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-08-29 15:29:06.435627 | orchestrator | | 53afe3b8851f4c31a4e7e045390da977 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-08-29 15:29:06.435639 | orchestrator | | 5543c16a280a4ac2ac279f01ec7eecc5 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-08-29 15:29:06.435650 | orchestrator | | 583f5c4954e54f398ba6863a4d43cb58 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-08-29 15:29:06.435660 | orchestrator | | 711ce3f66e2f49e5a35b538bf6a1ef6a | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-08-29 15:29:06.435676 | orchestrator | | 8336f9e6150b4109a5972ea500288759 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-08-29 15:29:06.435687 | orchestrator | | 92b82bf8f9f545e688d054810993ba85 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-08-29 15:29:06.435698 | orchestrator | | 96faaf94408c4f819d9219959739f4a4 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-08-29 15:29:06.435715 | orchestrator | | 9e4589c2c12943029c20959ed355e30c | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-08-29 15:29:06.435734 | orchestrator | | ade3037b52344666904919a9fc4d738b | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-08-29 15:29:06.435752 | orchestrator | | b3438390b6bf471daf328c7a94178818 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-08-29 15:29:06.435770 | orchestrator | | b8976cbf9b6349b6868dc0fc79cf4a40 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-08-29 15:29:06.435787 | orchestrator | | bf95a14e0b664f59bf0f805f3a2694b5 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-08-29 15:29:06.435806 | orchestrator | | c234026bc24843bc906e1a547acc2569 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-08-29 15:29:06.435824 | orchestrator | | ddf0783a7c164aa69069edb8263c7ca9 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-08-29 15:29:06.435844 | orchestrator | | e884e1f2b25046a19b59c6ab0ffe4e2e | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-08-29 15:29:06.435863 | orchestrator | | eb62d12d09a44b5bb76f2b7b56a22e65 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-08-29 15:29:06.435886 | 
orchestrator | | ee354ad4c130472f9cc914b7f9df8882 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-08-29 15:29:06.435916 | orchestrator | | f432f2d2effd4ee2976626c5732bf3f8 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-08-29 15:29:06.435927 | orchestrator | | f45c1868a5bf421a9ca3cea81306e2e1 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-08-29 15:29:06.435941 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-08-29 15:29:06.738754 | orchestrator | 2025-08-29 15:29:06.738842 | orchestrator | # Cinder 2025-08-29 15:29:06.738855 | orchestrator | 2025-08-29 15:29:06.738865 | orchestrator | + echo 2025-08-29 15:29:06.738875 | orchestrator | + echo '# Cinder' 2025-08-29 15:29:06.738885 | orchestrator | + echo 2025-08-29 15:29:06.738895 | orchestrator | + openstack volume service list 2025-08-29 15:29:10.090763 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 15:29:10.090869 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-08-29 15:29:10.090882 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 15:29:10.090892 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T15:29:01.000000 | 2025-08-29 15:29:10.090902 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T15:29:03.000000 | 2025-08-29 15:29:10.090912 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T15:29:05.000000 | 2025-08-29 15:29:10.090922 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-08-29T15:29:09.000000 | 2025-08-29 15:29:10.090931 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-08-29T15:29:00.000000 | 2025-08-29 15:29:10.090941 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-08-29T15:29:01.000000 | 2025-08-29 15:29:10.090950 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-08-29T15:29:01.000000 | 2025-08-29 15:29:10.090960 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-08-29T15:29:01.000000 | 2025-08-29 15:29:10.090969 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-08-29T15:29:01.000000 | 2025-08-29 15:29:10.090997 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 15:29:10.541096 | orchestrator | 2025-08-29 15:29:10.541178 | orchestrator | # Neutron 2025-08-29 15:29:10.541188 | orchestrator | 2025-08-29 15:29:10.541196 | orchestrator | + echo 2025-08-29 15:29:10.541204 | orchestrator | + echo '# Neutron' 2025-08-29 15:29:10.541212 | orchestrator | + echo 2025-08-29 15:29:10.541220 | orchestrator | + openstack network agent list 2025-08-29 15:29:13.559235 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 15:29:13.559361 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State 
| Binary | 2025-08-29 15:29:13.560068 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 15:29:13.560099 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-08-29 15:29:13.560111 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-08-29 15:29:13.560138 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-08-29 15:29:13.560190 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-08-29 15:29:13.560202 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-08-29 15:29:13.560213 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-08-29 15:29:13.560223 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 15:29:13.560234 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 15:29:13.560245 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 15:29:13.560255 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 15:29:13.919852 | orchestrator | + openstack network service provider list 2025-08-29 15:29:16.681002 | orchestrator | +---------------+------+---------+ 2025-08-29 15:29:16.681101 | orchestrator | | Service Type | Name | Default | 2025-08-29 15:29:16.681117 | orchestrator | +---------------+------+---------+ 2025-08-29 15:29:16.681128 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-08-29 15:29:16.681139 | orchestrator | +---------------+------+---------+ 2025-08-29 15:29:17.033193 | orchestrator | 2025-08-29 15:29:17.033291 | orchestrator | # Nova 2025-08-29 15:29:17.033306 | orchestrator | 2025-08-29 15:29:17.033319 | orchestrator | + echo 2025-08-29 15:29:17.033330 | orchestrator | + echo '# Nova' 2025-08-29 15:29:17.033342 | orchestrator | + echo 2025-08-29 15:29:17.033354 | orchestrator | + openstack compute service list 2025-08-29 15:29:20.388842 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 15:29:20.388935 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-08-29 15:29:20.388945 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 15:29:20.388952 | orchestrator | | 11f4863b-fe1d-47bf-a397-849161f4aed6 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T15:29:12.000000 | 2025-08-29 15:29:20.388958 | orchestrator | | 88b9fdd2-3955-4ad3-a48d-b83d6cc0874d | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T15:29:15.000000 | 2025-08-29 15:29:20.388965 | orchestrator | | 626468e4-6808-47a8-83c0-2b9988983a9c | nova-scheduler | testbed-node-1 | internal | enabled | up | 
2025-08-29T15:29:12.000000 | 2025-08-29 15:29:20.388972 | orchestrator | | f6068323-14ac-4884-8998-1bf6962ebc2a | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-08-29T15:29:15.000000 | 2025-08-29 15:29:20.388978 | orchestrator | | 222fe6ca-a7c7-4eb7-911f-a92f72763290 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-08-29T15:29:16.000000 | 2025-08-29 15:29:20.388984 | orchestrator | | 3e7f462a-bb89-4a3c-892e-2b78203918da | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-08-29T15:29:18.000000 | 2025-08-29 15:29:20.388991 | orchestrator | | 44ca9294-5b86-40ac-b7bf-49e1c97cbf86 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-08-29T15:29:15.000000 | 2025-08-29 15:29:20.388997 | orchestrator | | 480c0c4a-70b1-4804-8cb3-04b6a154d3da | nova-compute | testbed-node-4 | nova | enabled | up | 2025-08-29T15:29:16.000000 | 2025-08-29 15:29:20.389003 | orchestrator | | 1b89fd01-5e9a-42e9-adba-e3b8e32da40e | nova-compute | testbed-node-3 | nova | enabled | up | 2025-08-29T15:29:16.000000 | 2025-08-29 15:29:20.389028 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 15:29:20.729613 | orchestrator | + openstack hypervisor list 2025-08-29 15:29:25.638425 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 15:29:25.638650 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-08-29 15:29:25.638668 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 15:29:25.638681 | orchestrator | | bd4c06a5-88a3-45c0-b2f0-124157cb6fc3 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-08-29 15:29:25.638692 | orchestrator | | 7d664051-1242-4396-a0de-80ee4d554c7a | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-08-29 15:29:25.638702 | orchestrator | | bcd12a5c-aa56-4528-b5c9-6be22bfa5d5e | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-08-29 15:29:25.638713 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 15:29:26.030445 | orchestrator | 2025-08-29 15:29:26.030578 | orchestrator | # Run OpenStack test play 2025-08-29 15:29:26.030605 | orchestrator | 2025-08-29 15:29:26.030625 | orchestrator | + echo 2025-08-29 15:29:26.030646 | orchestrator | + echo '# Run OpenStack test play' 2025-08-29 15:29:26.030666 | orchestrator | + echo 2025-08-29 15:29:26.030687 | orchestrator | + osism apply --environment openstack test 2025-08-29 15:29:27.980768 | orchestrator | 2025-08-29 15:29:27 | INFO  | Trying to run play test in environment openstack 2025-08-29 15:29:38.143178 | orchestrator | 2025-08-29 15:29:38 | INFO  | Task ff10b43a-6449-46fd-a880-935d6d1f2881 (test) was prepared for execution. 2025-08-29 15:29:38.143289 | orchestrator | 2025-08-29 15:29:38 | INFO  | It takes a moment until task ff10b43a-6449-46fd-a880-935d6d1f2881 (test) has been started and output is visible here. 
2025-08-29 15:35:28.945730 | orchestrator | 2025-08-29 15:35:28.945882 | orchestrator | PLAY [Create test project] ***************************************************** 2025-08-29 15:35:28.945903 | orchestrator | 2025-08-29 15:35:28.945996 | orchestrator | TASK [Create test domain] ****************************************************** 2025-08-29 15:35:28.946010 | orchestrator | Friday 29 August 2025 15:29:42 +0000 (0:00:00.082) 0:00:00.082 ********* 2025-08-29 15:35:28.946078 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946090 | orchestrator | 2025-08-29 15:35:28.946101 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-08-29 15:35:28.946112 | orchestrator | Friday 29 August 2025 15:29:46 +0000 (0:00:03.971) 0:00:04.053 ********* 2025-08-29 15:35:28.946123 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946134 | orchestrator | 2025-08-29 15:35:28.946145 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-08-29 15:35:28.946156 | orchestrator | Friday 29 August 2025 15:29:50 +0000 (0:00:04.480) 0:00:08.534 ********* 2025-08-29 15:35:28.946166 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946177 | orchestrator | 2025-08-29 15:35:28.946188 | orchestrator | TASK [Create test project] ***************************************************** 2025-08-29 15:35:28.946199 | orchestrator | Friday 29 August 2025 15:29:57 +0000 (0:00:06.601) 0:00:15.135 ********* 2025-08-29 15:35:28.946209 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946220 | orchestrator | 2025-08-29 15:35:28.946231 | orchestrator | TASK [Create test user] ******************************************************** 2025-08-29 15:35:28.946242 | orchestrator | Friday 29 August 2025 15:30:01 +0000 (0:00:04.199) 0:00:19.335 ********* 2025-08-29 15:35:28.946252 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946263 | orchestrator | 2025-08-29 15:35:28.946275 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-08-29 15:35:28.946287 | orchestrator | Friday 29 August 2025 15:30:06 +0000 (0:00:04.269) 0:00:23.605 ********* 2025-08-29 15:35:28.946299 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-08-29 15:35:28.946312 | orchestrator | changed: [localhost] => (item=member) 2025-08-29 15:35:28.946325 | orchestrator | changed: [localhost] => (item=creator) 2025-08-29 15:35:28.946359 | orchestrator | 2025-08-29 15:35:28.946372 | orchestrator | TASK [Create test server group] ************************************************ 2025-08-29 15:35:28.946383 | orchestrator | Friday 29 August 2025 15:30:18 +0000 (0:00:12.051) 0:00:35.656 ********* 2025-08-29 15:35:28.946395 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946407 | orchestrator | 2025-08-29 15:35:28.946420 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-08-29 15:35:28.946431 | orchestrator | Friday 29 August 2025 15:30:22 +0000 (0:00:04.327) 0:00:39.983 ********* 2025-08-29 15:35:28.946444 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946455 | orchestrator | 2025-08-29 15:35:28.946467 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-08-29 15:35:28.946479 | orchestrator | Friday 29 August 2025 15:30:27 +0000 (0:00:05.133) 0:00:45.117 ********* 2025-08-29 15:35:28.946491 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946502 | 
orchestrator | 2025-08-29 15:35:28.946515 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-08-29 15:35:28.946527 | orchestrator | Friday 29 August 2025 15:30:31 +0000 (0:00:04.377) 0:00:49.495 ********* 2025-08-29 15:35:28.946539 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946551 | orchestrator | 2025-08-29 15:35:28.946563 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-08-29 15:35:28.946575 | orchestrator | Friday 29 August 2025 15:30:36 +0000 (0:00:04.414) 0:00:53.910 ********* 2025-08-29 15:35:28.946587 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946598 | orchestrator | 2025-08-29 15:35:28.946610 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-08-29 15:35:28.946623 | orchestrator | Friday 29 August 2025 15:30:40 +0000 (0:00:04.337) 0:00:58.247 ********* 2025-08-29 15:35:28.946634 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946645 | orchestrator | 2025-08-29 15:35:28.946656 | orchestrator | TASK [Create test network topology] ******************************************** 2025-08-29 15:35:28.946681 | orchestrator | Friday 29 August 2025 15:30:44 +0000 (0:00:04.144) 0:01:02.392 ********* 2025-08-29 15:35:28.946692 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.946702 | orchestrator | 2025-08-29 15:35:28.946713 | orchestrator | TASK [Create test instances] *************************************************** 2025-08-29 15:35:28.946723 | orchestrator | Friday 29 August 2025 15:30:58 +0000 (0:00:13.401) 0:01:15.793 ********* 2025-08-29 15:35:28.946734 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 15:35:28.946744 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 15:35:28.946755 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 15:35:28.946765 | orchestrator | 2025-08-29 15:35:28.946776 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-08-29 15:35:28.946786 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 15:35:28.946797 | orchestrator | 2025-08-29 15:35:28.946808 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-08-29 15:35:28.946818 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 15:35:28.946829 | orchestrator | 2025-08-29 15:35:28.946839 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-08-29 15:35:28.946849 | orchestrator | Friday 29 August 2025 15:34:08 +0000 (0:03:10.138) 0:04:25.931 ********* 2025-08-29 15:35:28.946860 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 15:35:28.946870 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 15:35:28.946881 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 15:35:28.946891 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 15:35:28.946902 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 15:35:28.946943 | orchestrator | 2025-08-29 15:35:28.946955 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-08-29 15:35:28.946965 | orchestrator | Friday 29 August 2025 15:34:31 +0000 (0:00:22.897) 0:04:48.829 ********* 2025-08-29 15:35:28.946976 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 15:35:28.946986 | orchestrator | changed: [localhost] => (item=test-1) 
2025-08-29 15:35:28.947005 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 15:35:28.947016 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 15:35:28.947045 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 15:35:28.947056 | orchestrator | 2025-08-29 15:35:28.947072 | orchestrator | TASK [Create test volume] ****************************************************** 2025-08-29 15:35:28.947083 | orchestrator | Friday 29 August 2025 15:35:03 +0000 (0:00:32.286) 0:05:21.116 ********* 2025-08-29 15:35:28.947093 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.947104 | orchestrator | 2025-08-29 15:35:28.947115 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-08-29 15:35:28.947125 | orchestrator | Friday 29 August 2025 15:35:10 +0000 (0:00:06.910) 0:05:28.026 ********* 2025-08-29 15:35:28.947136 | orchestrator | changed: [localhost] 2025-08-29 15:35:28.947146 | orchestrator | 2025-08-29 15:35:28.947157 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-08-29 15:35:28.947167 | orchestrator | Friday 29 August 2025 15:35:23 +0000 (0:00:13.241) 0:05:41.268 ********* 2025-08-29 15:35:28.947178 | orchestrator | ok: [localhost] 2025-08-29 15:35:28.947189 | orchestrator | 2025-08-29 15:35:28.947200 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-08-29 15:35:28.947210 | orchestrator | Friday 29 August 2025 15:35:28 +0000 (0:00:04.955) 0:05:46.223 ********* 2025-08-29 15:35:28.947220 | orchestrator | ok: [localhost] => { 2025-08-29 15:35:28.947231 | orchestrator |  "msg": "192.168.112.181" 2025-08-29 15:35:28.947242 | orchestrator | } 2025-08-29 15:35:28.947253 | orchestrator | 2025-08-29 15:35:28.947263 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:35:28.947275 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:35:28.947286 | orchestrator | 2025-08-29 15:35:28.947297 | orchestrator | 2025-08-29 15:35:28.947308 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:35:28.947318 | orchestrator | Friday 29 August 2025 15:35:28 +0000 (0:00:00.043) 0:05:46.267 ********* 2025-08-29 15:35:28.947329 | orchestrator | =============================================================================== 2025-08-29 15:35:28.947340 | orchestrator | Create test instances ------------------------------------------------- 190.14s 2025-08-29 15:35:28.947350 | orchestrator | Add tag to instances --------------------------------------------------- 32.29s 2025-08-29 15:35:28.947361 | orchestrator | Add metadata to instances ---------------------------------------------- 22.90s 2025-08-29 15:35:28.947371 | orchestrator | Create test network topology ------------------------------------------- 13.40s 2025-08-29 15:35:28.947382 | orchestrator | Attach test volume ----------------------------------------------------- 13.24s 2025-08-29 15:35:28.947393 | orchestrator | Add member roles to user test ------------------------------------------ 12.05s 2025-08-29 15:35:28.947403 | orchestrator | Create test volume ------------------------------------------------------ 6.91s 2025-08-29 15:35:28.947414 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.60s 2025-08-29 15:35:28.947424 | orchestrator | Create 
ssh security group ----------------------------------------------- 5.13s 2025-08-29 15:35:28.947435 | orchestrator | Create floating ip address ---------------------------------------------- 4.96s 2025-08-29 15:35:28.947446 | orchestrator | Create test-admin user -------------------------------------------------- 4.48s 2025-08-29 15:35:28.947456 | orchestrator | Create icmp security group ---------------------------------------------- 4.41s 2025-08-29 15:35:28.947467 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.38s 2025-08-29 15:35:28.947477 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.34s 2025-08-29 15:35:28.947488 | orchestrator | Create test server group ------------------------------------------------ 4.33s 2025-08-29 15:35:28.947499 | orchestrator | Create test user -------------------------------------------------------- 4.27s 2025-08-29 15:35:28.947517 | orchestrator | Create test project ----------------------------------------------------- 4.20s 2025-08-29 15:35:28.947527 | orchestrator | Create test keypair ----------------------------------------------------- 4.15s 2025-08-29 15:35:28.947538 | orchestrator | Create test domain ------------------------------------------------------ 3.97s 2025-08-29 15:35:28.947548 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-08-29 15:35:29.193298 | orchestrator | + server_list 2025-08-29 15:35:29.193397 | orchestrator | + openstack --os-cloud test server list 2025-08-29 15:35:32.838138 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 15:35:32.838249 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-08-29 15:35:32.838263 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 15:35:32.838274 | orchestrator | | fb40c4d9-2550-4522-8ec8-190df8eaa3d8 | test-4 | ACTIVE | auto_allocated_network=10.42.0.21, 192.168.112.161 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 15:35:32.838285 | orchestrator | | 8739d4dd-b454-428f-875f-bec896a1991f | test-3 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.184 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 15:35:32.838295 | orchestrator | | 1a835563-79d1-43b0-a23a-17b76289610d | test-2 | ACTIVE | auto_allocated_network=10.42.0.55, 192.168.112.102 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 15:35:32.838306 | orchestrator | | 505ce539-f28d-4abb-ab72-a81f3b9e721e | test-1 | ACTIVE | auto_allocated_network=10.42.0.53, 192.168.112.122 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 15:35:32.838339 | orchestrator | | a94c53f9-8f63-4af3-9e36-a0e886ec53c7 | test | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.181 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 15:35:32.838350 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 15:35:33.104844 | orchestrator | + openstack --os-cloud test server show test 2025-08-29 15:35:36.464022 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:36.464101 | orchestrator | | Field | Value | 2025-08-29 15:35:36.464110 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:36.464116 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 15:35:36.464121 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 15:35:36.464127 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 15:35:36.464147 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-08-29 15:35:36.464158 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 15:35:36.464164 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 15:35:36.464170 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 15:35:36.464175 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 15:35:36.464192 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 15:35:36.464198 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 15:35:36.464203 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 15:35:36.464209 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 15:35:36.464214 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 15:35:36.464224 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 15:35:36.464230 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 15:35:36.464238 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:31:27.000000 | 2025-08-29 15:35:36.464244 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 15:35:36.464249 | orchestrator | | accessIPv4 | | 2025-08-29 15:35:36.464254 | orchestrator | | accessIPv6 | | 2025-08-29 15:35:36.464260 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.181 | 2025-08-29 15:35:36.464269 | orchestrator | | config_drive | | 2025-08-29 15:35:36.464274 | orchestrator | | created | 2025-08-29T15:31:06Z | 2025-08-29 15:35:36.464280 | orchestrator | | description | None | 2025-08-29 15:35:36.464285 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 15:35:36.464295 | orchestrator | | hostId | 6e346405ee6f83e90cab7976de8220ab77bef1c48e5adf2e3c1a9921 | 2025-08-29 15:35:36.464301 | orchestrator | | host_status | None | 2025-08-29 15:35:36.464306 | orchestrator | | id | a94c53f9-8f63-4af3-9e36-a0e886ec53c7 | 2025-08-29 15:35:36.464315 | orchestrator | | image | Cirros 0.6.2 (41ef023b-6f58-4974-bb4f-04ba1edda57a) | 2025-08-29 15:35:36.464320 | orchestrator | | key_name | test | 2025-08-29 15:35:36.464326 | orchestrator | | locked | False | 2025-08-29 
15:35:36.464331 | orchestrator | | locked_reason | None | 2025-08-29 15:35:36.464337 | orchestrator | | name | test | 2025-08-29 15:35:36.464345 | orchestrator | | pinned_availability_zone | None | 2025-08-29 15:35:36.464351 | orchestrator | | progress | 0 | 2025-08-29 15:35:36.464356 | orchestrator | | project_id | a3b04edbe7d045a1b8c0b6abcae7c3c5 | 2025-08-29 15:35:36.464366 | orchestrator | | properties | hostname='test' | 2025-08-29 15:35:36.464372 | orchestrator | | security_groups | name='icmp' | 2025-08-29 15:35:36.464377 | orchestrator | | | name='ssh' | 2025-08-29 15:35:36.464382 | orchestrator | | server_groups | None | 2025-08-29 15:35:36.464391 | orchestrator | | status | ACTIVE | 2025-08-29 15:35:36.464396 | orchestrator | | tags | test | 2025-08-29 15:35:36.464402 | orchestrator | | trusted_image_certificates | None | 2025-08-29 15:35:36.464407 | orchestrator | | updated | 2025-08-29T15:34:13Z | 2025-08-29 15:35:36.464415 | orchestrator | | user_id | dbfb2527aeb04a74b38ae8437cfd67f5 | 2025-08-29 15:35:36.464421 | orchestrator | | volumes_attached | delete_on_termination='False', id='9a2fa47a-da4f-4dc6-ac0f-c291095d2714' | 2025-08-29 15:35:36.468546 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:36.726202 | orchestrator | + openstack --os-cloud test server show test-1 2025-08-29 15:35:39.919536 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:39.919620 | orchestrator | | Field | Value | 2025-08-29 15:35:39.919630 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:39.919637 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 15:35:39.919644 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 15:35:39.919651 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 15:35:39.919658 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-08-29 15:35:39.919664 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 15:35:39.919671 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 15:35:39.919693 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 15:35:39.919717 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 15:35:39.919737 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 15:35:39.919744 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 15:35:39.919751 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 15:35:39.919757 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 15:35:39.919763 | orchestrator | | OS-EXT-STS:power_state 
| Running | 2025-08-29 15:35:39.919773 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 15:35:39.919779 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 15:35:39.919786 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:32:10.000000 | 2025-08-29 15:35:39.919792 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 15:35:39.919799 | orchestrator | | accessIPv4 | | 2025-08-29 15:35:39.919812 | orchestrator | | accessIPv6 | | 2025-08-29 15:35:39.919823 | orchestrator | | addresses | auto_allocated_network=10.42.0.53, 192.168.112.122 | 2025-08-29 15:35:39.919836 | orchestrator | | config_drive | | 2025-08-29 15:35:39.919843 | orchestrator | | created | 2025-08-29T15:31:50Z | 2025-08-29 15:35:39.919849 | orchestrator | | description | None | 2025-08-29 15:35:39.919855 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 15:35:39.919865 | orchestrator | | hostId | 5bf1129d45fb601ccc7bee4df07a0c7bbbe785754db966c46461e2cc | 2025-08-29 15:35:39.919872 | orchestrator | | host_status | None | 2025-08-29 15:35:39.919878 | orchestrator | | id | 505ce539-f28d-4abb-ab72-a81f3b9e721e | 2025-08-29 15:35:39.919884 | orchestrator | | image | Cirros 0.6.2 (41ef023b-6f58-4974-bb4f-04ba1edda57a) | 2025-08-29 15:35:39.919890 | orchestrator | | key_name | test | 2025-08-29 15:35:39.919902 | orchestrator | | locked | False | 2025-08-29 15:35:39.919908 | orchestrator | | locked_reason | None | 2025-08-29 15:35:39.919915 | orchestrator | | name | test-1 | 2025-08-29 15:35:39.920034 | orchestrator | | pinned_availability_zone | None | 2025-08-29 15:35:39.920053 | orchestrator | | progress | 0 | 2025-08-29 15:35:39.920064 | orchestrator | | project_id | a3b04edbe7d045a1b8c0b6abcae7c3c5 | 2025-08-29 15:35:39.920071 | orchestrator | | properties | hostname='test-1' | 2025-08-29 15:35:39.920083 | orchestrator | | security_groups | name='icmp' | 2025-08-29 15:35:39.920090 | orchestrator | | | name='ssh' | 2025-08-29 15:35:39.920098 | orchestrator | | server_groups | None | 2025-08-29 15:35:39.920105 | orchestrator | | status | ACTIVE | 2025-08-29 15:35:39.920122 | orchestrator | | tags | test | 2025-08-29 15:35:39.920130 | orchestrator | | trusted_image_certificates | None | 2025-08-29 15:35:39.920137 | orchestrator | | updated | 2025-08-29T15:34:17Z | 2025-08-29 15:35:39.920149 | orchestrator | | user_id | dbfb2527aeb04a74b38ae8437cfd67f5 | 2025-08-29 15:35:39.920156 | orchestrator | | volumes_attached | | 2025-08-29 15:35:39.920765 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:40.291977 | orchestrator | + openstack --os-cloud test server show test-2 2025-08-29 15:35:43.438533 | orchestrator | 
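The full per-server dumps are kept for the archived logs; for scripted assertions it is usually easier to pull single fields with -f value. A minimal sketch, assuming the same five server names and the lower-case field names shown in the tables:

  # Print status and addresses per server without parsing the ASCII tables.
  for name in test test-1 test-2 test-3 test-4; do
      status=$(openstack --os-cloud test server show "$name" -f value -c status)
      addresses=$(openstack --os-cloud test server show "$name" -f value -c addresses)
      echo "$name: status=$status addresses=$addresses"
  done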
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:43.439393 | orchestrator | | Field | Value | 2025-08-29 15:35:43.439431 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:43.439441 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 15:35:43.439467 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 15:35:43.439474 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 15:35:43.439482 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-08-29 15:35:43.439489 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 15:35:43.439496 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 15:35:43.439504 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 15:35:43.439511 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 15:35:43.439536 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 15:35:43.439544 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 15:35:43.439556 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 15:35:43.439563 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 15:35:43.439576 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 15:35:43.439584 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 15:35:43.439591 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 15:35:43.439598 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:32:49.000000 | 2025-08-29 15:35:43.439606 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 15:35:43.439613 | orchestrator | | accessIPv4 | | 2025-08-29 15:35:43.439620 | orchestrator | | accessIPv6 | | 2025-08-29 15:35:43.439628 | orchestrator | | addresses | auto_allocated_network=10.42.0.55, 192.168.112.102 | 2025-08-29 15:35:43.439640 | orchestrator | | config_drive | | 2025-08-29 15:35:43.439647 | orchestrator | | created | 2025-08-29T15:32:27Z | 2025-08-29 15:35:43.439655 | orchestrator | | description | None | 2025-08-29 15:35:43.439667 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 15:35:43.439681 | orchestrator | | hostId | 64f069e4a45053907bb22b836d6679adf9c9944a2d841961de5d2926 | 2025-08-29 15:35:43.439688 | orchestrator | | host_status | None | 2025-08-29 15:35:43.439696 | orchestrator | | id | 1a835563-79d1-43b0-a23a-17b76289610d | 2025-08-29 15:35:43.439703 | orchestrator | | image | Cirros 0.6.2 (41ef023b-6f58-4974-bb4f-04ba1edda57a) | 2025-08-29 15:35:43.439710 | orchestrator | | key_name | test | 2025-08-29 15:35:43.439717 | orchestrator | | locked | False | 2025-08-29 
15:35:43.439725 | orchestrator | | locked_reason | None | 2025-08-29 15:35:43.439732 | orchestrator | | name | test-2 | 2025-08-29 15:35:43.439744 | orchestrator | | pinned_availability_zone | None | 2025-08-29 15:35:43.439751 | orchestrator | | progress | 0 | 2025-08-29 15:35:43.439767 | orchestrator | | project_id | a3b04edbe7d045a1b8c0b6abcae7c3c5 | 2025-08-29 15:35:43.439774 | orchestrator | | properties | hostname='test-2' | 2025-08-29 15:35:43.439781 | orchestrator | | security_groups | name='icmp' | 2025-08-29 15:35:43.439789 | orchestrator | | | name='ssh' | 2025-08-29 15:35:43.439796 | orchestrator | | server_groups | None | 2025-08-29 15:35:43.439803 | orchestrator | | status | ACTIVE | 2025-08-29 15:35:43.439810 | orchestrator | | tags | test | 2025-08-29 15:35:43.439818 | orchestrator | | trusted_image_certificates | None | 2025-08-29 15:35:43.439825 | orchestrator | | updated | 2025-08-29T15:34:22Z | 2025-08-29 15:35:43.439836 | orchestrator | | user_id | dbfb2527aeb04a74b38ae8437cfd67f5 | 2025-08-29 15:35:43.439848 | orchestrator | | volumes_attached | | 2025-08-29 15:35:43.444497 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:43.724510 | orchestrator | + openstack --os-cloud test server show test-3 2025-08-29 15:35:46.732211 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:46.732318 | orchestrator | | Field | Value | 2025-08-29 15:35:46.732333 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:46.732345 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 15:35:46.732356 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 15:35:46.732368 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 15:35:46.732379 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-08-29 15:35:46.732390 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 15:35:46.732401 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 15:35:46.732436 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 15:35:46.732449 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 15:35:46.732494 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 15:35:46.732507 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 15:35:46.732519 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 15:35:46.732530 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 15:35:46.732542 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 15:35:46.732553 | orchestrator | | 
OS-EXT-STS:task_state | None | 2025-08-29 15:35:46.732564 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 15:35:46.732576 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:33:25.000000 | 2025-08-29 15:35:46.732587 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 15:35:46.732606 | orchestrator | | accessIPv4 | | 2025-08-29 15:35:46.732618 | orchestrator | | accessIPv6 | | 2025-08-29 15:35:46.732630 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.184 | 2025-08-29 15:35:46.732654 | orchestrator | | config_drive | | 2025-08-29 15:35:46.732666 | orchestrator | | created | 2025-08-29T15:33:10Z | 2025-08-29 15:35:46.732678 | orchestrator | | description | None | 2025-08-29 15:35:46.732690 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 15:35:46.732701 | orchestrator | | hostId | 6e346405ee6f83e90cab7976de8220ab77bef1c48e5adf2e3c1a9921 | 2025-08-29 15:35:46.732712 | orchestrator | | host_status | None | 2025-08-29 15:35:46.732723 | orchestrator | | id | 8739d4dd-b454-428f-875f-bec896a1991f | 2025-08-29 15:35:46.732750 | orchestrator | | image | Cirros 0.6.2 (41ef023b-6f58-4974-bb4f-04ba1edda57a) | 2025-08-29 15:35:46.732761 | orchestrator | | key_name | test | 2025-08-29 15:35:46.732772 | orchestrator | | locked | False | 2025-08-29 15:35:46.732783 | orchestrator | | locked_reason | None | 2025-08-29 15:35:46.732799 | orchestrator | | name | test-3 | 2025-08-29 15:35:46.732817 | orchestrator | | pinned_availability_zone | None | 2025-08-29 15:35:46.732829 | orchestrator | | progress | 0 | 2025-08-29 15:35:46.732841 | orchestrator | | project_id | a3b04edbe7d045a1b8c0b6abcae7c3c5 | 2025-08-29 15:35:46.732852 | orchestrator | | properties | hostname='test-3' | 2025-08-29 15:35:46.732864 | orchestrator | | security_groups | name='icmp' | 2025-08-29 15:35:46.732876 | orchestrator | | | name='ssh' | 2025-08-29 15:35:46.732896 | orchestrator | | server_groups | None | 2025-08-29 15:35:46.732908 | orchestrator | | status | ACTIVE | 2025-08-29 15:35:46.732919 | orchestrator | | tags | test | 2025-08-29 15:35:46.732930 | orchestrator | | trusted_image_certificates | None | 2025-08-29 15:35:46.732988 | orchestrator | | updated | 2025-08-29T15:34:26Z | 2025-08-29 15:35:46.733006 | orchestrator | | user_id | dbfb2527aeb04a74b38ae8437cfd67f5 | 2025-08-29 15:35:46.733018 | orchestrator | | volumes_attached | | 2025-08-29 15:35:46.737345 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:47.048475 | orchestrator | + openstack --os-cloud test server show test-4 2025-08-29 15:35:50.043111 | orchestrator | 
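After the last server dump below, the server_ping step pings every ACTIVE floating IP. A sketch of that loop, mirroring the commands visible later in this transcript; the explicit exit on failure is an addition for standalone use:

  # Ping each ACTIVE floating IP three times; fail fast if any host is unreachable.
  for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
          -f value -c "Floating IP Address" | tr -d '\r'); do
      ping -c3 "$address" || { echo "no reply from $address" >&2; exit 1; }
  done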
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:50.043242 | orchestrator | | Field | Value | 2025-08-29 15:35:50.043259 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:50.043291 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 15:35:50.043316 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 15:35:50.043327 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 15:35:50.043339 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-08-29 15:35:50.043350 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 15:35:50.043365 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 15:35:50.043377 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 15:35:50.043388 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 15:35:50.043418 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 15:35:50.043430 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 15:35:50.043441 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 15:35:50.043461 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 15:35:50.043472 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 15:35:50.043483 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 15:35:50.043494 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 15:35:50.043505 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:33:58.000000 | 2025-08-29 15:35:50.043516 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 15:35:50.043532 | orchestrator | | accessIPv4 | | 2025-08-29 15:35:50.043543 | orchestrator | | accessIPv6 | | 2025-08-29 15:35:50.043554 | orchestrator | | addresses | auto_allocated_network=10.42.0.21, 192.168.112.161 | 2025-08-29 15:35:50.043572 | orchestrator | | config_drive | | 2025-08-29 15:35:50.043593 | orchestrator | | created | 2025-08-29T15:33:42Z | 2025-08-29 15:35:50.043606 | orchestrator | | description | None | 2025-08-29 15:35:50.043619 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 15:35:50.043632 | orchestrator | | hostId | 64f069e4a45053907bb22b836d6679adf9c9944a2d841961de5d2926 | 2025-08-29 15:35:50.043644 | orchestrator | | host_status | None | 2025-08-29 15:35:50.043656 | orchestrator | | id | fb40c4d9-2550-4522-8ec8-190df8eaa3d8 | 2025-08-29 15:35:50.043669 | orchestrator | | image | Cirros 0.6.2 (41ef023b-6f58-4974-bb4f-04ba1edda57a) | 2025-08-29 15:35:50.043681 | orchestrator | | key_name | test | 2025-08-29 15:35:50.043699 | orchestrator | | locked | False | 2025-08-29 
15:35:50.043713 | orchestrator | | locked_reason | None | 2025-08-29 15:35:50.043726 | orchestrator | | name | test-4 | 2025-08-29 15:35:50.043754 | orchestrator | | pinned_availability_zone | None | 2025-08-29 15:35:50.043767 | orchestrator | | progress | 0 | 2025-08-29 15:35:50.043780 | orchestrator | | project_id | a3b04edbe7d045a1b8c0b6abcae7c3c5 | 2025-08-29 15:35:50.043790 | orchestrator | | properties | hostname='test-4' | 2025-08-29 15:35:50.043801 | orchestrator | | security_groups | name='icmp' | 2025-08-29 15:35:50.043812 | orchestrator | | | name='ssh' | 2025-08-29 15:35:50.043823 | orchestrator | | server_groups | None | 2025-08-29 15:35:50.043834 | orchestrator | | status | ACTIVE | 2025-08-29 15:35:50.043850 | orchestrator | | tags | test | 2025-08-29 15:35:50.043861 | orchestrator | | trusted_image_certificates | None | 2025-08-29 15:35:50.043872 | orchestrator | | updated | 2025-08-29T15:34:30Z | 2025-08-29 15:35:50.043895 | orchestrator | | user_id | dbfb2527aeb04a74b38ae8437cfd67f5 | 2025-08-29 15:35:50.043907 | orchestrator | | volumes_attached | | 2025-08-29 15:35:50.047253 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:35:50.379775 | orchestrator | + server_ping 2025-08-29 15:35:50.381150 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-08-29 15:35:50.381981 | orchestrator | ++ tr -d '\r' 2025-08-29 15:35:53.290599 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:35:53.290705 | orchestrator | + ping -c3 192.168.112.181 2025-08-29 15:35:53.304410 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2025-08-29 15:35:53.304501 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=5.83 ms 2025-08-29 15:35:54.301651 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.39 ms 2025-08-29 15:35:55.302759 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.56 ms 2025-08-29 15:35:55.302875 | orchestrator | 2025-08-29 15:35:55.302892 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-08-29 15:35:55.302906 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-08-29 15:35:55.302918 | orchestrator | rtt min/avg/max/mdev = 1.558/3.256/5.825/1.847 ms 2025-08-29 15:35:55.303223 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:35:55.303249 | orchestrator | + ping -c3 192.168.112.161 2025-08-29 15:35:55.314759 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
2025-08-29 15:35:55.314860 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=6.42 ms 2025-08-29 15:35:56.312602 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.62 ms 2025-08-29 15:35:57.313658 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.61 ms 2025-08-29 15:35:57.313770 | orchestrator | 2025-08-29 15:35:57.313786 | orchestrator | --- 192.168.112.161 ping statistics --- 2025-08-29 15:35:57.313799 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-08-29 15:35:57.313809 | orchestrator | rtt min/avg/max/mdev = 1.610/3.549/6.419/2.070 ms 2025-08-29 15:35:57.314167 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:35:57.314190 | orchestrator | + ping -c3 192.168.112.184 2025-08-29 15:35:57.326279 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 2025-08-29 15:35:57.326340 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=6.50 ms 2025-08-29 15:35:58.324480 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.67 ms 2025-08-29 15:35:59.327522 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.91 ms 2025-08-29 15:35:59.327641 | orchestrator | 2025-08-29 15:35:59.327664 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-08-29 15:35:59.327678 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2005ms 2025-08-29 15:35:59.327689 | orchestrator | rtt min/avg/max/mdev = 1.907/3.690/6.498/2.009 ms 2025-08-29 15:35:59.327701 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:35:59.327748 | orchestrator | + ping -c3 192.168.112.102 2025-08-29 15:35:59.341400 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 2025-08-29 15:35:59.341489 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=9.28 ms 2025-08-29 15:36:00.336204 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.49 ms 2025-08-29 15:36:01.337589 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.96 ms 2025-08-29 15:36:01.337694 | orchestrator | 2025-08-29 15:36:01.337710 | orchestrator | --- 192.168.112.102 ping statistics --- 2025-08-29 15:36:01.337722 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 15:36:01.337757 | orchestrator | rtt min/avg/max/mdev = 1.955/4.573/9.276/3.332 ms 2025-08-29 15:36:01.338010 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:36:01.338078 | orchestrator | + ping -c3 192.168.112.122 2025-08-29 15:36:01.351791 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 
2025-08-29 15:36:01.351819 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=9.46 ms 2025-08-29 15:36:02.347280 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.67 ms 2025-08-29 15:36:03.347979 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.99 ms 2025-08-29 15:36:03.348177 | orchestrator | 2025-08-29 15:36:03.348197 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-08-29 15:36:03.348210 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 15:36:03.348221 | orchestrator | rtt min/avg/max/mdev = 1.990/4.704/9.459/3.373 ms 2025-08-29 15:36:03.348243 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-08-29 15:36:03.465944 | orchestrator | ok: Runtime: 0:11:38.039898 2025-08-29 15:36:03.505748 | 2025-08-29 15:36:03.505862 | TASK [Run tempest] 2025-08-29 15:36:04.040347 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:04.053300 | 2025-08-29 15:36:04.053528 | TASK [Check prometheus alert status] 2025-08-29 15:36:04.589651 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:04.593088 | 2025-08-29 15:36:04.593259 | PLAY RECAP 2025-08-29 15:36:04.593436 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-08-29 15:36:04.593506 | 2025-08-29 15:36:04.833534 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-08-29 15:36:04.835879 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 15:36:05.601745 | 2025-08-29 15:36:05.601912 | PLAY [Post output play] 2025-08-29 15:36:05.627246 | 2025-08-29 15:36:05.627463 | LOOP [stage-output : Register sources] 2025-08-29 15:36:05.699317 | 2025-08-29 15:36:05.699670 | TASK [stage-output : Check sudo] 2025-08-29 15:36:06.491582 | orchestrator | sudo: a password is required 2025-08-29 15:36:06.739536 | orchestrator | ok: Runtime: 0:00:00.010067 2025-08-29 15:36:06.759236 | 2025-08-29 15:36:06.759547 | LOOP [stage-output : Set source and destination for files and folders] 2025-08-29 15:36:06.795904 | 2025-08-29 15:36:06.796163 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-08-29 15:36:06.856209 | orchestrator | ok 2025-08-29 15:36:06.864895 | 2025-08-29 15:36:06.865026 | LOOP [stage-output : Ensure target folders exist] 2025-08-29 15:36:07.322531 | orchestrator | ok: "docs" 2025-08-29 15:36:07.323281 | 2025-08-29 15:36:07.552609 | orchestrator | ok: "artifacts" 2025-08-29 15:36:07.805965 | orchestrator | ok: "logs" 2025-08-29 15:36:07.819609 | 2025-08-29 15:36:07.819746 | LOOP [stage-output : Copy files and folders to staging folder] 2025-08-29 15:36:07.866951 | 2025-08-29 15:36:07.867232 | TASK [stage-output : Make all log files readable] 2025-08-29 15:36:08.149341 | orchestrator | ok 2025-08-29 15:36:08.158823 | 2025-08-29 15:36:08.158997 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-08-29 15:36:08.204637 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:08.220013 | 2025-08-29 15:36:08.220184 | TASK [stage-output : Discover log files for compression] 2025-08-29 15:36:08.245457 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:08.260299 | 2025-08-29 15:36:08.260474 | LOOP [stage-output : Archive everything from logs] 2025-08-29 15:36:08.307658 | 2025-08-29 15:36:08.307845 | PLAY [Post cleanup play] 2025-08-29 15:36:08.317031 | 2025-08-29 15:36:08.317144 | TASK [Set cloud fact (Zuul 
deployment)] 2025-08-29 15:36:08.386114 | orchestrator | ok 2025-08-29 15:36:08.399641 | 2025-08-29 15:36:08.399776 | TASK [Set cloud fact (local deployment)] 2025-08-29 15:36:08.426214 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:08.440017 | 2025-08-29 15:36:08.440147 | TASK [Clean the cloud environment] 2025-08-29 15:36:12.973218 | orchestrator | 2025-08-29 15:36:12 - clean up servers 2025-08-29 15:36:13.793124 | orchestrator | 2025-08-29 15:36:13 - testbed-manager 2025-08-29 15:36:13.879850 | orchestrator | 2025-08-29 15:36:13 - testbed-node-3 2025-08-29 15:36:13.969538 | orchestrator | 2025-08-29 15:36:13 - testbed-node-4 2025-08-29 15:36:14.054611 | orchestrator | 2025-08-29 15:36:14 - testbed-node-1 2025-08-29 15:36:14.132529 | orchestrator | 2025-08-29 15:36:14 - testbed-node-5 2025-08-29 15:36:14.221425 | orchestrator | 2025-08-29 15:36:14 - testbed-node-0 2025-08-29 15:36:14.311625 | orchestrator | 2025-08-29 15:36:14 - testbed-node-2 2025-08-29 15:36:14.402353 | orchestrator | 2025-08-29 15:36:14 - clean up keypairs 2025-08-29 15:36:14.420437 | orchestrator | 2025-08-29 15:36:14 - testbed 2025-08-29 15:36:14.446082 | orchestrator | 2025-08-29 15:36:14 - wait for servers to be gone 2025-08-29 15:36:25.281088 | orchestrator | 2025-08-29 15:36:25 - clean up ports 2025-08-29 15:36:25.461468 | orchestrator | 2025-08-29 15:36:25 - 014628a5-6e97-47fa-9659-3aae10c4984c 2025-08-29 15:36:25.696616 | orchestrator | 2025-08-29 15:36:25 - 30bdfc66-3a33-4a4c-bc77-9e0d241dcd38 2025-08-29 15:36:26.193971 | orchestrator | 2025-08-29 15:36:26 - 63436677-c07a-445c-bd70-35bd742e0e23 2025-08-29 15:36:26.417897 | orchestrator | 2025-08-29 15:36:26 - 70680576-4fb9-424b-90d5-e4cbb344190a 2025-08-29 15:36:26.630648 | orchestrator | 2025-08-29 15:36:26 - 7a01ad33-1eed-41b7-bebf-4b74cf218ee4 2025-08-29 15:36:26.839486 | orchestrator | 2025-08-29 15:36:26 - a0e0e667-69db-47a2-94f6-152df566ce00 2025-08-29 15:36:27.039917 | orchestrator | 2025-08-29 15:36:27 - b2368927-4666-4a63-a0bc-e05c306dd44b 2025-08-29 15:36:27.247019 | orchestrator | 2025-08-29 15:36:27 - clean up volumes 2025-08-29 15:36:27.362360 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-4-node-base 2025-08-29 15:36:27.399324 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-1-node-base 2025-08-29 15:36:27.440110 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-3-node-base 2025-08-29 15:36:27.483731 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-2-node-base 2025-08-29 15:36:27.531909 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-0-node-base 2025-08-29 15:36:27.574585 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-manager-base 2025-08-29 15:36:27.617505 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-5-node-base 2025-08-29 15:36:27.659625 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-0-node-3 2025-08-29 15:36:27.703746 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-8-node-5 2025-08-29 15:36:27.746117 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-5-node-5 2025-08-29 15:36:27.787843 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-4-node-4 2025-08-29 15:36:27.830832 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-2-node-5 2025-08-29 15:36:27.870860 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-7-node-4 2025-08-29 15:36:27.912501 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-3-node-3 2025-08-29 15:36:27.950630 | orchestrator | 2025-08-29 15:36:27 - testbed-volume-6-node-3 2025-08-29 15:36:27.989713 | orchestrator | 2025-08-29 
15:36:27 - testbed-volume-1-node-4 2025-08-29 15:36:28.034174 | orchestrator | 2025-08-29 15:36:28 - disconnect routers 2025-08-29 15:36:28.168524 | orchestrator | 2025-08-29 15:36:28 - testbed 2025-08-29 15:36:29.151149 | orchestrator | 2025-08-29 15:36:29 - clean up subnets 2025-08-29 15:36:29.206735 | orchestrator | 2025-08-29 15:36:29 - subnet-testbed-management 2025-08-29 15:36:29.359929 | orchestrator | 2025-08-29 15:36:29 - clean up networks 2025-08-29 15:36:29.574423 | orchestrator | 2025-08-29 15:36:29 - net-testbed-management 2025-08-29 15:36:29.835551 | orchestrator | 2025-08-29 15:36:29 - clean up security groups 2025-08-29 15:36:29.874746 | orchestrator | 2025-08-29 15:36:29 - testbed-node 2025-08-29 15:36:29.991185 | orchestrator | 2025-08-29 15:36:29 - testbed-management 2025-08-29 15:36:30.103898 | orchestrator | 2025-08-29 15:36:30 - clean up floating ips 2025-08-29 15:36:30.140376 | orchestrator | 2025-08-29 15:36:30 - 81.163.193.54 2025-08-29 15:36:30.590311 | orchestrator | 2025-08-29 15:36:30 - clean up routers 2025-08-29 15:36:30.686324 | orchestrator | 2025-08-29 15:36:30 - testbed 2025-08-29 15:36:32.003115 | orchestrator | ok: Runtime: 0:00:22.786282 2025-08-29 15:36:32.007637 | 2025-08-29 15:36:32.007803 | PLAY RECAP 2025-08-29 15:36:32.007924 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-08-29 15:36:32.007974 | 2025-08-29 15:36:32.143897 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 15:36:32.146533 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 15:36:32.913333 | 2025-08-29 15:36:32.913540 | PLAY [Cleanup play] 2025-08-29 15:36:32.932381 | 2025-08-29 15:36:32.932542 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 15:36:33.000924 | orchestrator | ok 2025-08-29 15:36:33.010297 | 2025-08-29 15:36:33.010470 | TASK [Set cloud fact (local deployment)] 2025-08-29 15:36:33.044905 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:33.057463 | 2025-08-29 15:36:33.057622 | TASK [Clean the cloud environment] 2025-08-29 15:36:34.187112 | orchestrator | 2025-08-29 15:36:34 - clean up servers 2025-08-29 15:36:34.652947 | orchestrator | 2025-08-29 15:36:34 - clean up keypairs 2025-08-29 15:36:34.668837 | orchestrator | 2025-08-29 15:36:34 - wait for servers to be gone 2025-08-29 15:36:34.704805 | orchestrator | 2025-08-29 15:36:34 - clean up ports 2025-08-29 15:36:34.784801 | orchestrator | 2025-08-29 15:36:34 - clean up volumes 2025-08-29 15:36:34.856947 | orchestrator | 2025-08-29 15:36:34 - disconnect routers 2025-08-29 15:36:34.881646 | orchestrator | 2025-08-29 15:36:34 - clean up subnets 2025-08-29 15:36:34.905884 | orchestrator | 2025-08-29 15:36:34 - clean up networks 2025-08-29 15:36:35.055151 | orchestrator | 2025-08-29 15:36:35 - clean up security groups 2025-08-29 15:36:35.091680 | orchestrator | 2025-08-29 15:36:35 - clean up floating ips 2025-08-29 15:36:35.115907 | orchestrator | 2025-08-29 15:36:35 - clean up routers 2025-08-29 15:36:35.601161 | orchestrator | ok: Runtime: 0:00:01.331414 2025-08-29 15:36:35.605331 | 2025-08-29 15:36:35.605534 | PLAY RECAP 2025-08-29 15:36:35.605679 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-08-29 15:36:35.605751 | 2025-08-29 15:36:35.745047 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 15:36:35.747720 | POST-RUN START: [trusted : 
github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 15:36:36.483425 | 2025-08-29 15:36:36.483591 | PLAY [Base post-fetch] 2025-08-29 15:36:36.499132 | 2025-08-29 15:36:36.499265 | TASK [fetch-output : Set log path for multiple nodes] 2025-08-29 15:36:36.565240 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:36.579805 | 2025-08-29 15:36:36.580021 | TASK [fetch-output : Set log path for single node] 2025-08-29 15:36:36.628188 | orchestrator | ok 2025-08-29 15:36:36.637321 | 2025-08-29 15:36:36.637475 | LOOP [fetch-output : Ensure local output dirs] 2025-08-29 15:36:37.152063 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/work/logs" 2025-08-29 15:36:37.424076 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/work/artifacts" 2025-08-29 15:36:37.705245 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2a234aeae003479eb1e4b9822ba3cdf0/work/docs" 2025-08-29 15:36:37.733054 | 2025-08-29 15:36:37.733298 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-08-29 15:36:38.684011 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:36:38.684613 | orchestrator | changed: All items complete 2025-08-29 15:36:38.684711 | 2025-08-29 15:36:39.422530 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:36:40.140133 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:36:40.172884 | 2025-08-29 15:36:40.173075 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-08-29 15:36:40.211701 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:40.214743 | orchestrator | skipping: Conditional result was False 2025-08-29 15:36:40.235795 | 2025-08-29 15:36:40.235941 | PLAY RECAP 2025-08-29 15:36:40.236020 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-08-29 15:36:40.236061 | 2025-08-29 15:36:40.368663 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 15:36:40.369682 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 15:36:41.112687 | 2025-08-29 15:36:41.113004 | PLAY [Base post] 2025-08-29 15:36:41.133859 | 2025-08-29 15:36:41.134003 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-08-29 15:36:42.124868 | orchestrator | changed 2025-08-29 15:36:42.134731 | 2025-08-29 15:36:42.134918 | PLAY RECAP 2025-08-29 15:36:42.135005 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-08-29 15:36:42.135092 | 2025-08-29 15:36:42.263784 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 15:36:42.264843 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-08-29 15:36:43.057476 | 2025-08-29 15:36:43.057670 | PLAY [Base post-logs] 2025-08-29 15:36:43.069006 | 2025-08-29 15:36:43.069146 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-08-29 15:36:43.566351 | localhost | changed 2025-08-29 15:36:43.582126 | 2025-08-29 15:36:43.582300 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-08-29 15:36:43.612911 | localhost | ok 2025-08-29 15:36:43.618206 | 2025-08-29 15:36:43.618354 | TASK [Set zuul-log-path fact] 2025-08-29 15:36:43.637212 | localhost | ok 2025-08-29 15:36:43.651073 | 2025-08-29 15:36:43.651222 | TASK [set-zuul-log-path-fact : Set log path for a build] 
2025-08-29 15:36:43.692976 | localhost | ok 2025-08-29 15:36:43.697766 | 2025-08-29 15:36:43.697899 | TASK [upload-logs : Create log directories] 2025-08-29 15:36:44.296853 | localhost | changed 2025-08-29 15:36:44.299941 | 2025-08-29 15:36:44.300058 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-08-29 15:36:44.831397 | localhost -> localhost | ok: Runtime: 0:00:00.007374 2025-08-29 15:36:44.839739 | 2025-08-29 15:36:44.839933 | TASK [upload-logs : Upload logs to log server] 2025-08-29 15:36:45.414932 | localhost | Output suppressed because no_log was given 2025-08-29 15:36:45.417264 | 2025-08-29 15:36:45.417392 | LOOP [upload-logs : Compress console log and json output] 2025-08-29 15:36:45.478584 | localhost | skipping: Conditional result was False 2025-08-29 15:36:45.483539 | localhost | skipping: Conditional result was False 2025-08-29 15:36:45.498111 | 2025-08-29 15:36:45.498355 | LOOP [upload-logs : Upload compressed console log and json output] 2025-08-29 15:36:45.549127 | localhost | skipping: Conditional result was False 2025-08-29 15:36:45.549712 | 2025-08-29 15:36:45.553247 | localhost | skipping: Conditional result was False 2025-08-29 15:36:45.560616 | 2025-08-29 15:36:45.560855 | LOOP [upload-logs : Upload console log and json output]
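For reference, a rough CLI approximation of the cleanup order logged in the "Clean the cloud environment" tasks above (servers, keypairs, ports, volumes, router detach, subnets, networks, security groups, floating IPs, routers). The job uses its own cleanup helper and derives the cloud name from a fact; the `testbed` cloud name and the blanket port/volume deletion below are simplifying assumptions, and the real helper handles resource ownership and ordering more carefully:

  #!/usr/bin/env bash
  set -euo pipefail
  os() { openstack --os-cloud testbed "$@"; }   # hypothetical clouds.yaml entry

  for id in $(os server list -f value -c ID);    do os server delete --wait "$id"; done
  for kp in $(os keypair list -f value -c Name); do os keypair delete "$kp"; done
  for id in $(os port list -f value -c ID);      do os port delete "$id"; done
  for id in $(os volume list -f value -c ID);    do os volume delete "$id"; done
  # Resource names below are taken from the log above.
  os router remove subnet testbed subnet-testbed-management
  os subnet delete subnet-testbed-management
  os network delete net-testbed-management
  os security group delete testbed-node testbed-management
  for ip in $(os floating ip list -f value -c "Floating IP Address"); do os floating ip delete "$ip"; done
  os router delete testbed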